The post L’appel du 19 juin appeared first on Bertrand Meyer's technology+ blog.

*Frenchwomen, Frenchmen, my dear compatriots,*

*I wanted to speak to you yesterday, but I was told that the phrase “appel du 18 juin” was already taken, so I decided to wait until today, 19 June 2020. One day should not make much of a difference.*

*My message to each and every one of you is simple:*

Do not go on vacation this year.

*I know, it is hard. For the French, vacations have been sacred since 1936. All year long you talk essentially about the last vacation and the next one. But this time it is really not the moment. Even if you do not go abroad, with the best will in the world you will still pile onto the beaches and into the hotels. You will give the mask a try, but one only has to walk down the street to see how many are worn below the nose, or above the mouth, or both, serving strictly no purpose.*

*If we go away as we do every year, imagine the situation that will inevitably follow. Project yourselves a few months ahead — to 28 October, to pick a date at random. Are you ready for 35,000 cases and 240 deaths a day, growing with no foreseeable end? For hospitals overwhelmed once again? For — I hesitate to utter the hated word! — a new lockdown, the one we had promised to avoid but which would have become inevitable? And for everything that follows from it — bankruptcies, layoffs, separations? Not even to mention dismal end-of-year holidays without the slightest light at the end of the tunnel.*

*No, I am sure of it: all this is unthinkable and is not what you want.*

*So sacrifice your vacation this year, so as not to have to sacrifice far more in the months and years that follow. Stay home. Save your money, if only to treat yourselves to an excellent vacation next year. Read books, watch films, do your exercises, but avoid travel and gatherings. Let us pause now, the better to bounce back afterwards. If you work in tourism, the stretch will be difficult, and the State will help you; but giving in to the easy way out would only make your prospects worse still.*

*In these last days of spring, when everything seems to smile upon you, you may not yet be fully aware of it, but soon the north wind will have come: do not risk, for the brief pleasure of a beautiful summer, an autumn and a winter worse than anything our era has ever known.*

*No vacation departures in the summer of 2020.*

For my part, I do not remember this speech.

All the more so since it was never delivered.


The post Things to do to an algorithm appeared first on Bertrand Meyer's technology+ blog.

You can *learn* an algorithm. Discovering classical algorithms is a large part of the Bildungsroman of a computer scientist. Sorting algorithms, graph algorithms, parsing algorithms, numerical algorithms, matrix algorithms, graphical algorithms…

You can *teach* an algorithm. Whether a professor or just a team leader, you explain to others why the obvious solution is not always the right one. As when I saw that someone had implemented the traversal part of a garbage collection scheme (the “mark” of mark-and-sweep) using a recursive algorithm. Recursion is a great tool, but not here: it needs a stack of unpredictable size, and garbage collection, which you trigger when you run out of space, is not precisely the moment to start wildly allocating memory. In comes the Deutsch-Schorr-Waite algorithm, which improbably (as if tightrope-walking) subverts the structure itself to find its way forth and back.

To teach it, you can *dance* an algorithm. Sounds strange, but Informatics Europe gave its 2013 education award to the “AlgoRhythmics” group at Sapientia University in Romania, which demonstrates algorithms through central-European dances; see their rendering of Merge Sort:

(Their page has more examples. I see that recently they expanded to other kinds of dance and will let you discover binary search as flamenco and backtracking as classical ballet.) More generally you can *simulate* or *animate* an algorithm.

You can *admire* an algorithm. Many indeed are a source of wonder. The inner beauty of topological sort, Levenshtein or AVL can leave no one indifferent.

You can *improve* an algorithm. At least you can try.

You can *invent* an algorithm. Small or large, ambitious or mundane, but not imagined yet by anyone. Devising a new algorithm is a sort of rite of passage in our profession. If it does prove elegant and useful, you’ll get a real kick (trust me). Then you can *publish* the algorithm.

You can *prove* an algorithm, that is to say, mathematically establish its correctness. It is indeed increasingly unreasonable to publish an algorithm without correctness arguments. Maybe I have an excuse here to advertise an article that examines important algorithms across a wide variety of fields and showcases their main claim to correctness: their loop invariants.

You can *implement* an algorithm. That is much of what we do in software engineering, even if as an OO guy I would immediately add “*as part of the associated data structure*”.

Of late, algorithms have come to be associated with yet another verb; one that I would definitely not have envisioned when first learning about algorithms in Knuth (the book) and from Knuth (the man — who most certainly does *not* use foul language).

You can *fuck* an algorithm.

Thousands of British students marched recently to that slogan:

chants of “fuck the algorithm” as a speaker talks of losing her place at medical school because she was downgraded. pic.twitter.com/P15jpuBscB

— huck (@HUCKmagazine) August 16, 2020

They were demonstrating against a formula (the *Guardian* gives the details) that decided on university admissions. The starting point for these events was a ministerial decision to select students not from their grades at exams (“A-level”), which could not take place because of Covid, but instead from their assessed performance in their schools. So far so good but the authorities decided to calibrate these results with parameters deduced from each school’s past performance. Your grade is no longer your grade: if Jill and Joan both got a B, but Jill’s school has been better at getting students into (say) Oxford in the past, then Jill’s B is worth more than Joan’s B.

The outcry was easy to predict, or should have been for a more savvy government. Students want to be judged by their own performance, not by the results of some other students they do not even know. Arguments that the sole concern was a legitimate one (an effort to compensate for possible grade inflation in some schools) ceased to be credible when it came out that on average the algorithm boosted grades from private schools by 4.7%. No theoretical justification was going to be of much comfort anyway to the many students who had been admitted to the universities of their dreams on the basis of their raw grades, and after the adjustment found themselves rejected.

In the end, “Fuck the Algorithm!” worked. The government withdrew the whole scheme; it tried to lay the blame for the fiasco on the regulatory authority (Ofqual), fooling no one.

These U.K. events of August 2020 will mark a turning point in the relationship between computer science and society. Not for the revelation that our technical choices have human consequences; that is old news, even if we often pretend to ignore it. Not for the use of Information Technology as an excuse; it is as old (“*Sorry, the computer does not allow that!*”) as IT itself. What “Fuck the Algorithm!” highlights is the massive danger of the current rush to apply machine learning to everything.

As long as we are talking marketing campaigns (“*customers who bought the product you just ordered also bought …*”) or image recognition, the admiring mood remains appropriate. But now, ever more often, machine learning (usually presented as “Artificial Intelligence” to sound more impressive) gets applied to decisions affecting human lives. In the US, for example, machine-learning algorithms increasingly help judges make decisions, or make the decisions themselves. Following this slippery path is crazy and unethical. It is also dangerous, as the U.K. students’ reaction indicates.

Machine learning does what the name indicates: it reproduces and generalizes the dominant behaviors of the past. The algorithms have no notion of right and wrong; they just learn. When they affect societal issues, the potential for societal disaster is everywhere.

Amid all the enthusiasm generated by the elegant techniques invented by machine-learning pioneers over the last two decades, one barely encounters any serious reservation. Codes of ethics (from ACM and others) have little to contribute.

We should be careful, though. Either we get our act together and define exacting controls on the use of machine learning for matters affecting people’s fates, or we will see a massive rejection of algorithmic technology, the right parts along with the wrong ones.

The British students of the year 2020’s weird summer will not be the last ones to tell us to fuck the algorithm.

*This article was first published in the Communications of the ACM blog.*


The post A novel concept for success in science appeared first on Bertrand Meyer's technology+ blog.

We are told to include a “related work” section, but just as important is the *unrelated* work section. For example:

**Unrelated work**

The following publications have no attested relevance to the topic of this paper but, as pointed out by an anonymous reviewer[2], they are breathtakingly brilliant: [Meyer 1997], [Meyer 2005], [Meyer et al. 2009], [Al et Meyer 2011]. In addition, having taken a look at the composition of the Editorial Board, we would like to point out the pioneering results introduced by [Meyer 2017] and [Meyer 2019].

This insight is shared with the sole selfless purpose of helping the community, particularly young and aspiring researchers.

[1] I did find a 2018 Twitter thread started by Arvind Narayanan, with the insightful (if dejected) observation that “*‘related work’ sections exclusively cite unrelated work*”.

[2] Example only, for the sake of an example, since for my part I actually refuse to be an anonymous reviewer; I always sign my reviews, so if I want to tell the authors “*I think you should cite my such-and-such paper here*” I can do so without any qualms.


The post New master program at SIT: Webinar tomorrow appeared first on Bertrand Meyer's technology+ blog.

The talks are in the form of a Webinar, starting at 9 AM this Tuesday (9 June). You can find all the details on the corresponding SIT page here.


The post PhD and postdoc positions in verification in Switzerland appeared first on Bertrand Meyer's technology+ blog.

- Software verification (axiomatic, model-checking, abstract interpretation etc.).
- Advanced techniques of software testing.
- Formal methods, semantics of programming languages, type theory.
- Design by Contract, Eiffel, techniques of correctness-by-construction.
- Cybersecurity.

Compensation at both levels is attractive. The PhD program is conducted in cooperation with partner universities.

Interested candidates should send a CV and relevant documents or links to bm@sit.org. They are also welcome to contact me for details.


The post Which one is better? Please answer this poll appeared first on Bertrand Meyer's technology+ blog.


Any mathematician worthy of the name has known such states of lucid exaltation, when ideas magically fall into place. Poincaré, in a famous passage, described how he experienced such a moment when he discovered Fuchsian functions. Of these states, Gauss reportedly said *procreare jucundum* (to procreate is a pleasure), while adding: *sed parturire molestum* (but giving birth is a pain). Unlike sexual pleasure, this one can last hours or even days. Whoever has experienced it wants to renew it, but is impotent to provoke it, except at best through obstinate work, of which it then appears as the reward.

This penetrating observation (if I may use the expression) brings a new perspective to the 18th-century French song famously revived by Joan Baez:

“The joy of love lasts but an instant”, it says, “the pain of love lasts a lifetime” (*plaisir d’amour ne dure qu’un moment, chagrin d’amour dure toute la vie*). Clearly, the second part of the verse contains an error: “*chagrin d’amour*” (the pain of love) must have been a transcription mistake for “*comprendre la démonstration par Paul Cohen de l’indépendance de l’hypothèse du continu*” (understanding Paul Cohen’s proof of the independence of the continuum hypothesis, which unlike most pains of love does take a lifetime, or two). It is not hard to see how the confusion could have arisen, as both sound French.

Still, Weil’s observation, if true, is worrying. As everyone can witness almost daily, mathematical ability in humankind is progressing by leaps and bounds. Does this trend portend bad news (echoes of Tolstoi’s *Kreutzer Sonata*) for our collective reproductive future?

A grave question indeed. To help address it, I have started a scientific poll, which you will find here. The question it asks is of the utmost importance: do you prefer sex or math? To preserve the integrity of the study, please note that by answering the poll you are declaring that you have engaged in both sex and math, although not necessarily at the same time.

As soon as I get the research grant for which I have applied, I will submit the results to the Proceedings of the National Academy of Sciences.

[1] André Weil, *Souvenirs d’Apprentissage*, Birkhäuser, Boston, 1991. English translation: *The Apprenticeship of a Mathematician*, Springer, 1992.


The post The fool wants nothing appeared first on Bertrand Meyer's technology+ blog.

An effect, of course, which has never been more visible than today (just watch the news).

Against Pedro, who narrates his travels and travails, the dialog sets two other characters, friends from his youth. They serve both as foils for Pedro, enabling his cleverness to shine — they are themselves not the brightest candles on the cake —, and as the embodiment of conventional wisdom. He occasionally gets really impatient with them, although always friendly, and at some point cites this ditty that he remembers from his youth in Spain:

The blind man wants to see

The deaf man wants to hear

The fat man wants to slim down

The lame man wants to run.

For the fool there is no remedy:

Since he fancies that he knows

He does not care to learn more.

Wow!


The post A retort that we could use appeared first on Bertrand Meyer's technology+ blog.

I am reading (in translation) the *Viaje de Turquia*, or Turkish Voyage — literally, Voyage of Turkey — a 16th-century epic dialog, whose authorship is disputed. It is a precious source of information on the period and rings throughout like a true story. The hero, Pedro, tells of his time as a prisoner of war of the Turks and the ignominies he had to suffer for years. He is a doctor, if a self-taught one, and has cured many members of the Pasha’s entourage, but at some point the Pasha, out of spite, sends him back to the harshest form of manual labor. One of his former patients, rich and high-ranked, spots him, the intellectual struggling to move heavy materials in the dirt and under the whip, and mocks him:

Hey, all the philosophy of Aristotle and Plato, all the medical science of Galen, all the eloquence of Cicero and Demosthenes, how have they helped you?

To which Pedro, having put his sack on his shoulder and wiped the tears caused by this pique, answers, looking him straight in the eye:

They have helped me live through days like this one.

Pretty good, I thought. Not just the sense of repartee, but the sentiment itself (echoed by the comments of many a mistreated intellectual in later ages, including ours).

Not only that, but it worked, at least for a while. So astounded was the persecutor by the retort, that he took Pedro’s sack to carry it himself, and convinced the Pasha to relieve Pedro from hard work and give him money.


The post How to protect from the coronavirus appeared first on Bertrand Meyer's technology+ blog.

It is not a mask.

It is two masks. You wear a mask, I wear a mask.

Many people still believe that they can only get the virus if an infected person coughs or sneezes on them. This is a tragic myth. Droplets are carried by breath in conversation, by food particles from someone eating near you, or simply by air flowing your way.

Anyone today who goes out without wearing a mask is irresponsible (or suicidal, but that is not an excuse, since he harms others too). Your mask is not enough, though. I must wear one too.

Then we are safe from each other. Remember: we do not have definitive figures, but at least one carrier in five is asymptomatic.

Everything else (and I am not even considering quack solutions and unproven treatments) is pointless. Disinfectant (or better, soap) helps, but as a complement. Gloves help medical professionals, who know how to use them properly, but for the general public they can do more harm than good: look at people in shops; once they have gloves they touch everything, moving the virus everywhere. Testing will be critical, of course, but here is another sobering statistic: while there are no false positives (if you test positive, you are infected), around 20% of negative tests are wrong (people have the virus, and it is not detected).

I know: in many places, including some of the most technologically advanced nations on earth, there are no masks to be found. This may be the greatest scandal of the modern era. But in the meantime makeshift masks are an acceptable palliative. There are guides all over the web as to how to make them, and if nothing else is available a tightly bound scarf or equivalent, cleaned thoroughly and regularly, will do.

Wear a mask and tell others to do the same.


The post Free tutoring for children appeared first on Bertrand Meyer's technology+ blog.


The post Getting a program right, in nine episodes appeared first on Bertrand Meyer's technology+ blog.

*(I was too lazy so far to collect all the references into a single list, so numbers such as [1] refer to the list at the end of the corresponding section.)*

A colleague recently asked me to present a short overview of axiomatic semantics as a guest lecture in one of his courses. I have been teaching courses on software verification for a long time (see e.g. here), so I have plenty of material; but instead of just reusing it, I decided to spend a bit of time on explaining why it is good to have a systematic approach to software verification. Here is the resulting tutorial.

Say “software verification” to software professionals, or computer science students outside of a few elite departments, and most of them will think “testing”. In a job interview, for example, show a loop-based algorithm to a programmer and ask “how would you verify it?”: most will start talking about devising clever test cases.

Far be it from me to berate testing [1]; in fact, I have always thought that the inevitable Dijkstra quote about testing — that it can only show the presence of errors, not their absence [2] — which everyone seems to take as an indictment and dismissal of testing (and which its author probably intended that way) is actually a fantastic advertisement *for* testing: a way to find bugs? Yes! Great! Where do I get it? But that is not the same as verifying the software, which means attempting to ascertain that it has *no* bugs.

Until listeners realize that verification cannot just mean testing, the best course material on axiomatic semantics or other proof techniques will not attract any interest. In fact, there is somewhere a video of a talk by the great testing and public-speaking guru James Whittaker where he starts by telling his audience not to worry, this won’t be a standard boring lecture, he will not start talking about loop invariants [3]! (Loop invariants are coming in this article, in fact they are one of its central concepts, but in later sections only, so don’t bring the sleeping bags yet.) I decided to start my lecture by giving an example of what happens when you do *not *use proper verification. More than one example, in fact, as you will see.

A warning about this article: there is nothing new here. I am using an example from my 1990 book *Introduction to the Theory of Programming Languages *(exercise 9.12). Going even further back, a 1983 “Programming Pearls” *Communications of the ACM* article by Jon Bentley [4] addresses the same example with the same basic ideas. Yet almost forty years later these ideas are still not widely known among practitioners. So consider these articles as yet another tutorial on fundamental software engineering stuff.

The tutorial is a quiz. We start with a program text:

— Program attempt #1.

**from**
  i := 1 ; j := n — **Result** initialized to 0.
**until** i = j **loop**
  m := (i + j) // 2 — Integer division
  **if** t [m] ≤ x **then** i := m **else** j := m **end**
**end**
**if** x = t [i] **then** Result := i **end**

All variables are of integer type. t is an up-sorted array of integers, indexed from 1 to n. We do not let any notation get between friends. A loop **from** p **until** e **loop** q **end** executes p then, repeatedly: stops if e (the exit condition) is true, otherwise executes q. (Like {p ; **while not** e **do** {q}} in some other notations.) “:=” is assignment, “=” equality testing. “//” is integer division, e.g. 6 // 3 = 7 // 3 = 2. **Result** is the name of a special variable whose final value will be returned by this computation (as part of a function, but we only look at the body). **Result** is automatically initialized to zero like all integer variables, so if execution does not assign anything to **Result** the function will return zero.
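If the loop notation is unfamiliar, it maps mechanically to a conventional while loop. A minimal Python sketch (the helper name `loop_from_until` is mine, for illustration only):

```python
# The construct `from p until e loop q end` runs the initialization p,
# then repeats the body q as long as the exit condition e is false.
# In conventional notation: p; while not e: q.

def loop_from_until(p, e, q):
    """Generic rendering of `from p until e loop q end`.
    p: initialization, e: exit condition, q: body; all are callables
    operating on shared mutable state."""
    p()
    while not e():
        q()

# Tiny demonstration: sum the integers 1 to 5.
state = {}

def init():            # from: i := 1 ; total := 0
    state["i"], state["total"] = 1, 0

def exit_cond():       # until: i > 5
    return state["i"] > 5

def body():            # loop: total := total + i ; i := i + 1
    state["total"] += state["i"]
    state["i"] += 1

loop_from_until(init, exit_cond, body)
print(state["total"])  # 15
```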

First question: what is this program trying to do?

OK, this is not the real quiz. I assume you know the answer: it is an attempt at “binary search”, which finds an element in the array, or determines its absence, in a sequence of about log₂(n) steps, rather than n if we were to use sequential search. (Remember we assume the array is sorted.) **Result** should give us a position where x appears in the array, if it does, and otherwise be zero.

Now for the real quiz: does this program meet this goal?

The answer should be either yes or no. (If no, I am not asking for a correct version, at least not yet, and in any case you can find some in the literature.) The situation is very non-symmetric, we might say Popperian:

- To justify a *no* answer, a *single* example suffices: a particular array t and a particular value x for which the program fails to set **Result** as it should.
- To justify a *yes* answer, we need to provide a credible argument that for *every* t and x the program sets **Result** as it should.

[1] The TAP conference series (Tests And Proofs), which Yuri Gurevich and I started, explores the complementarity between the two approaches.

[2] Dijkstra first published his observation in 1969. He did not need to consider the case of infinite input sets: even for a trivial finite program that multiplies two 32-bit integers, the number of cases to be examined, 2⁶⁴, is beyond human reach. More so today with 64-bit integers. Looking at this from a 2020 perspective, we may note that exhaustive testing of a finite set of cases, which Dijkstra dismissed as impossible in practice, is in fact exactly what the respected *model checking* verification technique does; not on the original program, but on a simplified — *abstracted* — version precisely designed to keep the number of cases tractable. Dijkstra’s argument remains valid, of course, for the original program if non-trivial. And model checking does not get us out of the woods: while we are safe if its “testing” finds no bug, if it does find one we have to ensure that the bug is a property of the original program rather than an artifact of the abstraction process.

[3] It is somewhere on YouTube, although I cannot find it right now.

[4] Jon Bentley: *Programming Pearls: Writing Correct Programs*, in *Communications of the ACM*, vol. 26, no. 12, pp. 1040-1045, December 1983, available for example here.

Was program #1 correct? If so it should yield the correct answer. (An answer is correct if either **Result** is the index in t of an element equal to x, or **Result** = 0 and x does not appear in t.)

This program is *not* correct. To prove that it is not correct, a single example (test case) for which the program does not “*yield the correct answer*” suffices. Assume x = 1 and the array t has two elements both equal to zero (n = 2, remember that arrays are indexed from 1):

t = [0 0]

The successive values of the variables and expressions are:

                            m   i   j   i + j
After initialization:       –   1   2   3
i ≠ j, so enter loop:       1   1   2   3   — First branch of “if” since t [1] ≤ x, so i gets assigned the value of m (that is, 1: no change)

But then neither of the values of i and j has changed, so the loop will repeat its body identically (taking the first branch) forever. It is not even that the program yields an incorrect answer: it does not yield an answer at all!
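The non-termination is easy to observe mechanically. Below is a rough Python transcription of attempt #1 (the function name and the step cap are mine; the array is made 1-indexed by an unused padding cell t[0]):

```python
def attempt_1(t, x, max_steps=100):
    """Program attempt #1, transcribed from the pseudocode.
    t is 1-indexed: t[1..n], with t[0] an unused padding cell.
    Returns the final Result, or None if the loop runs past max_steps."""
    n = len(t) - 1
    result = 0                  # Result initialized to 0
    i, j = 1, n
    steps = 0
    while i != j:
        steps += 1
        if steps > max_steps:
            return None         # no sign of terminating
        m = (i + j) // 2        # integer division
        if t[m] <= x:
            i = m               # i (or j) may stay unchanged here...
        else:
            j = m
    if x == t[i]:
        result = i
    return result

print(attempt_1([None, 5], 5))     # 1: fine when n = 1
print(attempt_1([None, 0, 0], 1))  # None: t = [0 0], x = 1 loops forever
```

On the counter-example t = [0 0] with x = 1, the values of i and j never change, exactly as the trace shows.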

Note, in reference to the famous Dijkstra quote mentioned in the first article, that while it is common to pit tests against proofs, *a test can actually be a proof*: a test that fails is a proof that the program is incorrect. As valid as the most complex mathematical proof. It may not be the kind of proof we like most (our customers tend to prefer a guarantee that the program is correct), but it is a proof all right.

We are now ready for the second attempt:

— Program attempt #2.

**from**
  i := 1 ; j := n
**until** i = j **or** **Result** > 0 **loop**
  m := (i + j) // 2 — Integer division
  **if** t [m] ≤ x **then**
    i := m + 1
  **elseif** t [m] = x **then**
    **Result** := m
  **else** — In this case t [m] > x
    j := m – 1
  **end**
**end**

Unlike the previous one this version always changes i or j, so we may hope it does not loop forever. It has a nice symmetry between i and j.

Same question as before: does this program meet its goal?

The question about program #2 was the same as about program #1: is it right?

Again no. A trivial example disproves it: n = 1, the array t contains a single element t [1] = 0, x = 0. Then the initialization sets both i and j to 1, i = j holds on entry to the loop which stops immediately, but **Result **is zero whereas it should be 1 (the place where x appears).
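A quick mechanical check of that counter-example, with attempt #2 transcribed into Python the same way (1-indexed via a padding cell; the function name is mine):

```python
def attempt_2(t, x):
    """Program attempt #2. t is 1-indexed: t[1..n], t[0] unused padding.
    Note that the `elif t[m] == x` branch is unreachable: whenever
    t[m] = x, the first test t[m] <= x has already fired."""
    n = len(t) - 1
    result = 0
    i, j = 1, n
    while not (i == j or result > 0):
        m = (i + j) // 2
        if t[m] <= x:
            i = m + 1
        elif t[m] == x:
            result = m
        else:
            j = m - 1
    return result

# n = 1, t = [0], x = 0: x appears at position 1, yet the loop is
# skipped (i = j on entry) and Result stays 0.
print(attempt_2([None, 0], 0))  # 0 — wrong, should be 1
```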

Here now is attempt #3; let us see if it fares better:

— Program attempt #3.

**from**
  i := 1 ; j := n
**until** i = j **loop**
  m := (i + j + 1) // 2
  **if** t [m] ≤ x **then**
    i := m + 1
  **else**
    j := m
  **end**
**end**
**if** 1 ≤ i **and** i ≤ n **then** **Result** := i **end** — If not, **Result** keeps its initial value 0.

What about this one?

The first two program attempts were wrong. What about the third?

I know, you have every right to be upset at me, but the answer is no once more.

Consider a two-element array t = [0 0] (so n = 2, remember that our arrays are indexed from 1 by convention) and a search value x = 1. The successive values of the variables and expressions are:

                            m   i   j   i + j + 1
After initialization:       –   1   2   4
i ≠ j, so enter loop:       2   3   2   6   — First branch of “if” since t [2] < x
i ≠ j, enter loop again:    3   ⚠ — Out-of-bounds memory access (trying to access non-existent t [3])!

Oops!
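The out-of-bounds access shows up immediately if we transcribe attempt #3 into Python the same way (1-indexed via a padding cell; in Python the symptom is an IndexError rather than a memory fault):

```python
def attempt_3(t, x):
    """Program attempt #3. t is 1-indexed: t[1..n], t[0] unused padding."""
    n = len(t) - 1
    result = 0
    i, j = 1, n
    while i != j:
        m = (i + j + 1) // 2
        if t[m] <= x:
            i = m + 1
        else:
            j = m
    if 1 <= i <= n:
        result = i
    return result

# t = [0 0], x = 1: the second computed index m is 3, past the end.
try:
    attempt_3([None, 0, 0], 1)
except IndexError as exc:
    print("out-of-bounds access:", exc)
```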

Note that we could hope to get rid of the array overflow by initializing i to 0 rather than 1. This variant (version #3′) is left as a bonus question to the patient reader. (*Hint*: it is also not correct. Find a counter-example.)

OK, this has to end at some point. What about the following version (#4): is it right?

— Program attempt #4.

**from**
  i := 0 ; j := n + 1
**until** i = j **loop**
  m := (i + j) // 2
  **if** t [m] ≤ x **then**
    i := m + 1
  **else**
    j := m
  **end**
**end**
**if** 1 ≤ i **and** i ≤ n **then** **Result** := i **end**

Yes, I know, this is dragging on. But that’s part of the idea: witnessing how hard it is to get a program right if you just fly by the seat of your pants. Maybe we can get it right this time?

Are we there yet? Is program attempt #4 finally correct?

Sorry to disappoint, but no. Consider once more a two-element array t = [0 0], so n = 2, this time with a search value x = 0, which does appear in t. (With x = 1, as in the previous counter-example, the final condition below would also be false, and the resulting **Result** = 0 would happen to be the right answer; that value does not expose the bug.) The successive values of the variables and expressions are:

                            m   i   j   i + j
After initialization:       –   0   3   3
i ≠ j, so enter loop:       1   2   3   5   — First branch of “**if**”
i ≠ j, enter loop again:    2   3   3   6   — First branch again
i = j, exit loop

The condition of the final “**if**” is false, since i = 3 and n = 2, so **Result** keeps its initial value 0. This is quite wrong, since x appears in t (at positions 1 and 2): the program declares an element absent even though it is present.
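To see the failure concretely, here is attempt #4 in the same Python transcription (1-indexed via a padding cell; the function name is mine), run on the x = 0 variant of the counter-example, where x does appear in t:

```python
def attempt_4(t, x):
    """Program attempt #4. t is 1-indexed: t[1..n], t[0] unused padding."""
    n = len(t) - 1
    result = 0
    i, j = 0, n + 1
    while i != j:
        m = (i + j) // 2
        if t[m] <= x:
            i = m + 1
        else:
            j = m
    if 1 <= i <= n:
        result = i
    return result

# t = [0 0], x = 0: the loop ends with i = 3, outside 1..n,
# so Result stays 0 even though x appears at positions 1 and 2.
print(attempt_4([None, 0, 0], 0))  # 0 — wrong, should be 1 or 2
```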

But we are so close! Something *like *this *should *work, should it not?

So patience, patience, let us tweak it just one trifle more, OK?

— Program attempt #5.

**from**
  i := 1 ; j := n + 1
**until** i ≥ j **or** **Result** > 0 **loop**
  m := (i + j) // 2
  **if** t [m] < x **then**
    i := m + 1
  **elseif** t [m] > x **then**
    j := m
  **else**
    **Result** := m
  **end**
**end**

Does it work now?

The question about program #5 was the same as before: is it right, is it wrong?

Well, I know you are growing more upset at me with each section, but the answer is still that this program is wrong. But the way it is wrong is somewhat specific; and it applies, in fact, to all previous variants as well.

This particular wrongness (fancy word for “bug”) has a history. As I pointed out in the first article, there is a long tradition of using binary search to illustrate software correctness issues. A number of versions were published and proved correct, including one in the justly admired *Programming Pearls* series by Jon Bentley. Then in 2006 Joshua Bloch, then at Google, published a now legendary blog article [2] which showed that all these versions suffered from a major flaw: to obtain m, the approximate mid-point between i and j, they compute

(i + j) // 2

which, working on computer integers rather than mathematical integers, might overflow! This happens in a situation in which both i and j, and hence m as well, are well within the range of the computer’s representable integers, −2ⁿ to 2ⁿ (give or take 1) where n is typically 31 or, these days, 63, so that there is no conceptual justification for the overflow.

In the specification that I have used for this article, i starts at 1, so the problem will only arise for an array that occupies half of the memory or more, which is a rather extreme case (but still should be handled properly). In the general case, it is often useful to use arrays with arbitrary bounds (as in Eiffel), so we can have even a small array, with high indices, for which the computation will produce an overflow and bad results.

The Bloch gotcha is a stark reminder that in considering the correctness of programs we must include all relevant aspects and consider programs as they are executed on a real computer, not as we wish they were executed in an ideal model world.

(Note that Jon Bentley alluded to this requirement in his original article: while he did not explicitly mention integer overflow, he felt it necessary to complement his proof by the comment that “*As laborious as our proof of binary search was, it is still unfinished by some standards. How would you prove that the program is free of runtime errors (such as division by zero, word overflow, or array indices out of bounds)?*” Prescient words!)

It is easy to correct the potential arithmetic overflow bug: instead of (i + j) // 2, Bloch suggested we compute the average as

i + (j – i) // 2

which is the same from a mathematician’s viewpoint, and indeed will compute the same value if both variants compute one, but will not overflow if both i and j are within range.
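Readers who want to see the wraparound concretely can simulate it. The following Python sketch (my own illustration, not from the article; Python integers never overflow, so 32-bit two's-complement arithmetic is emulated explicitly) contrasts the two midpoint computations:

```python
def to_int32(v):
    # Emulate 32-bit two's-complement wraparound: keep the low 32 bits,
    # then reinterpret the high bit as a sign bit.
    v &= 0xFFFFFFFF
    return v - 0x100000000 if v >= 0x80000000 else v

def mid_naive(i, j):
    # (i + j) // 2: the sum itself may exceed the 32-bit range.
    return to_int32(to_int32(i + j) // 2)

def mid_safe(i, j):
    # i + (j - i) // 2: every intermediate value stays in range
    # when i and j themselves are representable.
    return to_int32(i + to_int32(j - i) // 2)

i, j = 1, 2**31 - 1          # both comfortably representable in 32 bits
print(mid_naive(i, j))       # a negative "midpoint": the sum wrapped around
print(mid_safe(i, j))        # 1073741824, the correct midpoint
```

Both i and j are valid 32-bit values here; only the naive sum steps outside the representable range, exactly the situation Bloch described.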

So we are ready for version 6, which is the same as version 5 save for that single change:

— Program attempt #6.

**from**

i := 1 ; j := n + 1

**until** i ≥ j **or** **Result** > 0 **loop**

m := i + (j – i) // 2

**if** t [m] < x **then**

i := m + 1

**elseif** t [m] > x **then**

j := m

**else**

**Result** := m

**end**

**end**

Now is probably the right time to recall the words with which Donald Knuth introduces binary search in the original 1973 tome on *Sorting and Searching* of his seminal book series *The Art of Computer Programming*:

Although the basic idea of binary search is comparatively straightforward, the details can be somewhat tricky, and many good programmers have done it wrong the first few times they tried.

Do you need more convincing? Be careful what you answer: I have more variants up my sleeve and can come up with many more almost-right-but-actually-wrong program attempts if you nudge me. But OK, even the best things have an end. This is not the last section yet, but that was the last program attempt. To the naturally following question in this running quiz, “*is version 6 right or wrong?*”, I can provide the answer: it is, to the best of my knowledge, a correct program. Yes! [3]
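For readers who want to experiment, here is a line-by-line transcription of attempt #6 into Python (the transcription is mine, not part of the original Eiffel; 1-based indexing is emulated by leaving slot 0 of the list unused). As the next sections stress, running it proves nothing, but it lets you play with the algorithm:

```python
def binary_search(t, n, x):
    # t uses indices 1 .. n; t[0] is an unused placeholder.
    result = 0                    # 0 means "not found", as in the Eiffel version
    i, j = 1, n + 1
    while not (i >= j or result > 0):
        m = i + (j - i) // 2      # Bloch's overflow-safe midpoint
        if t[m] < x:
            i = m + 1
        elif t[m] > x:
            j = m
        else:
            result = m
    return result

t = [None, 1, 3, 3, 7, 9]         # sorted values at indices 1 .. 5
print(binary_search(t, 5, 7))     # 4: the index where 7 appears
print(binary_search(t, 5, 4))     # 0: 4 does not appear
```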

But the quiz continues. Since the answers to the previous questions were all that the programs were *not* correct, it sufficed in each case to find one case for which the program did not behave as expected. Our next question is of a different nature: can you find an argument why version #6 is correct?

[1] (In particular) Jon Bentley: *Programming Pearls — Writing Correct Programs*, in *Communications of the ACM*, vol. 26, no. 12, December 1983, pages 1040-1045, available here.

[2] Joshua Bloch: *Extra, Extra — Read All About It: Nearly All Binary Searches and Mergesorts are Broken*, blog post on the Google AI Blog, 2 June 2006, available here.

[3] A caveat: the program is correct barring any typos or copy-paste errors — I am starting from rigorously verified programs (see the next posts), but the blogging system’s UI and text processing facilities are not the best possible for entering precise technical text such as code. However carefully I check, I cannot rule out a clerical mistake, which of course would be corrected as soon as it is identified.

Preceding sections presented candidate binary search algorithms and asked whether they are correct. “Correct” means something quite precise: that for an array t and a value x, the final value of the variable **Result** is a valid index of t (that is to say, is between 1 and n, the size of t) if and only if x appears at that index in t.

The last section boldly stated that program attempt #6 was correct. The question was: why?

In the case of the preceding versions, which were incorrect, you could prove that property, and I do mean prove, simply by exhibiting a single counter-example: a single t and x for which the program does not correctly set **Result**. Now that I am asserting the program to be correct, one example, or a million examples, do not suffice. In fact they are almost irrelevant. Test as much as you like and get correct results every time: you still cannot get rid of the gnawing fear that if you had tested just one more time after the millionth test you would have produced a failure. Since the set of possible tests is infinite there is no solution in sight [1].

We need a proof.

I am going to explain that proof in the next section, but before that I would like to give you an opportunity to look at the proof by yourself. I wrote in one of the earlier articles that most of what I have to say was already present in Jon Bentley’s 1983 *Programming Pearls* contribution [2], but a dramatic change did occur in the four decades since: the appearance of automated proof systems that can handle significant, realistic programs. One such system, AutoProof, was developed at the Chair of Software Engineering at ETH Zurich [3] (key project members were Carlo Furia, Martin Nordio, Nadia Polikarpova and Julian Tschannen, with initial contributions by Bernd Schoeller) on the basis of the Boogie proof technology from Microsoft Research.

AutoProof is available for online use, and it turns out that one of the basic tutorial examples is binary search. You can go to the corresponding page and run the proof.

I am going to let you try this out (and, if you are curious, other online AutoProof examples as well) without too many explanations; those will come in the next section. Let me simply name the basic proof technique: **loop invariant**. A loop invariant is a property INV associated with a loop, such that:

**A**. After the loop’s initialization, INV will hold.

**B**. *One* execution of the loop’s body, if started with INV satisfied (and the loop’s exit condition *not* satisfied, otherwise we would not be executing the body!), satisfies INV again when it terminates.

This idea is of course the same as that of a proof by induction in mathematics: the initialization corresponds to the base step (proving that P (0) holds) and the body property to the induction step (proving that P (n + 1) follows from P (n)). With a traditional induction proof we deduce that the property P (n) holds for all integers. For the loop, we deduce that when the loop finishes its execution:

- The invariant still holds, since executing the loop means executing the initialization once then the loop body zero or more times.
- And of course the exit condition also holds, since otherwise we would still be looping.

That is how we prove the correctness of a loop: the conjunction of the invariant and the exit condition must yield the property that we seek (in the example, the property of **Result** relative to t and x stated above).

We also need to prove that the loop does terminate. This part involves another concept, the loop’s *variant*, which I will explain in the next section.

For the moment I will not say anything more and let you look at the AutoProof example page (again, you will find it here), run the verification, and read the invariant and other formal elements in the code.

To “run the verification” just click the **Verify** button on the page. Let me emphasize (and emphasize again and again and again) that clicking **Verify** will not run the code. There is no execution engine in AutoProof, and the verification does not use any test cases. It processes the text of the program as it appears on the page and below. It applies mathematical techniques to perform the proof; the core property to be proved is that the proposed loop invariant is indeed invariant (i.e. satisfies properties **A** and **B** above).

The program being proved on the AutoProof example page is version #6 from the last section, with different variable names. So far, for brevity, I have used short names such as i, j and m, but the program on the AutoProof site applies good naming practices, with variables called low, up, middle and the like. So here is that version again with the new variable names:

— Program attempt #7 (identical to #6 with different variable names).

**from**

low := 1 ; up := n + 1

**until** low ≥ up **or** **Result** > 0 **loop**

middle := low + ((up – low) // 2)

**if** a [middle] < value **then** — The array is now called a rather than t

low := middle + 1

**elseif** a [middle] > value **then**

up := middle

**else**

**Result** := middle

**end**

**end**

This is exactly the algorithm text on the AutoProof page, the one that you are invited to let AutoProof verify for you. I wrote “algorithm text” rather than “program text” because the actual program text (in Eiffel) includes **variant** and invariant clauses which do not affect the program’s execution but make the proof possible.

Whether or not these concepts (invariant, variant, program proof) are completely new to you, do try the prover and take a look at the proof-supporting clauses. In the next article I will remove any remaining mystery.

[1] Technically the set of possible [array, value] pairs is finite, but of a size defying human abilities. As I pointed out in the first section, the “model checking” and “abstract interpretation” verification techniques actually attempt to perform an exhaustive test anyway, after drastically reducing the size of the search space. That will be for some other article.

[2] Jon Bentley:* Programming Pearls: Writing Correct Programs*, in *Communications of the ACM*, vol. 26, no. 12, pp. 1040-1045, December 1983, available for example here.

[3] The AutoProof page contains documentation and numerous article references.

The previous section invited you to run the verification on the AutoProof tutorial page dedicated to the example. AutoProof is an automated proof system for programs. This is just a matter of clicking “Verify”, but more importantly, you should read the annotations added to the program text, particularly the loop invariant, which make the verification possible. (To avoid any confusion let me emphasize once more that clicking “Verify” does not run the program, and that no test cases are used; the effect is to run the verifier, which attempts to prove the correctness of the program by working solely on the program text.)

Here is the program text again, reverting for brevity to the shorter identifiers (the version on the AutoProof page has more expressive ones):

**from**

i := 1 ; j := n + 1

**until** i ≥ j **or** **Result** > 0 **loop**

m := i + (j – i) // 2

**if** t [m] < x **then**

i := m + 1

**elseif** t [m] > x **then**

j := m

**else**

**Result** := m

**end**

**end**

Let us now see what makes the proof possible. The key property is the loop invariant, which reads

A: 1 ≤ i ≤ j ≤ n + 1

B: 0 ≤ **Result** ≤ n

C: ∀ k: 1 .. i – 1 | t [k] < x

D: ∀ k: j .. n | t [k] > x

E: (**Result** > 0) ⇒ (t [**Result**] = x)

The notation is slightly different on the Web page, to adapt to the Eiffel language as it existed at the time the page was produced; in today’s Eiffel you can write the invariant almost as shown above. Long live Unicode, allowing us to use symbols such as ∀ (obtained not by typing them but through smart completion: you start typing “forall” and select the ∀ symbol that pops up), ⇒ for “**implies**”, and many others.

Remember that the invariant has to be established by the loop’s initialization and preserved by every iteration. The role of each of its clauses is as follows:

- A: keep the indices in range.
- B: keep the variable **Result**, whose final value will be returned by the function, in range.
- C and D: eliminate index intervals in which we have determined that the sought value, x, does not appear. Before i, array values are smaller; starting at j, they are greater. So these two intervals, 1 .. i – 1 and j .. n, cannot contain the sought value. The overall idea of the algorithm (and of most other search algorithms) is to extend one of these two intervals, so as to narrow down the remaining part of 1 .. n where x may appear.
- E: express that as soon as we find a positive (non-zero) **Result**, its value is an index in the array (see B) where x does appear.

Why is this invariant useful? The answer is that on exit it gives us what we want from the algorithm. The exit condition, recalled above, is

i ≥ j **or Result** > 0

Combined with the invariant, it tells us that on exit one of the following will hold:

- **Result** > 0; but then because of E we know that x appears at position **Result**.
- i ≥ j; but then A, C and D imply that x does not appear *anywhere* in t. In that case it cannot be true that **Result** > 0, and then because of B **Result** must be zero.

What AutoProof proves, mechanically, is that under the function’s precondition (that the array is sorted):

- The initialization ensures the invariant.
- The loop body, assuming that the invariant is satisfied but the exit condition is not, ensures the loop invariant again after it executes.
- The combination of the invariant and the exit condition ensures, as just explained, the postcondition of the function (the property that **Result** will either be positive and the index of an element equal to x, or zero with the guarantee that x appears nowhere in t).

Such a proof guarantees the correctness of the program *if it terminates*. We (and AutoProof) must prove separately that it *does* terminate. The technique is simple: find a “loop **variant**”, an integer quantity v which remains non-negative throughout the loop (in other words, the loop invariant includes or implies v ≥ 0) and decreases on each iteration, so that the loop cannot continue executing forever. An obvious variant here is j – i + 1 (where the + 1 is needed because j – i may go down to -1 on the last iteration if x does not appear in the array). It reflects the informal idea of the algorithm: repeatedly decrease an interval i .. j – 1 (initially, 1 .. n) guaranteed to be such that x appears in t if and only if it appears at an index in that interval. At the end, either we already found x or the interval is empty, implying that x does not appear at all.
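AutoProof establishes these properties statically, but clauses A to E and the variant can also be checked dynamically, as executable assertions. Here is an illustrative Python sketch (my own instrumentation, not AutoProof's; 1-based indexing emulated with an unused slot 0) that raises an error if any clause, or the decrease of the variant, is ever violated during a run:

```python
def checked_binary_search(t, n, x):
    result, i, j = 0, 1, n + 1

    def invariant():
        assert 1 <= i <= j <= n + 1                    # clause A
        assert 0 <= result <= n                        # clause B
        assert all(t[k] < x for k in range(1, i))      # clause C
        assert all(t[k] > x for k in range(j, n + 1))  # clause D
        assert result == 0 or t[result] == x           # clause E

    invariant()                     # established by the initialization
    while not (i >= j or result > 0):
        variant_before = j - i      # the loop variant, before this iteration
        m = i + (j - i) // 2
        if t[m] < x:
            i = m + 1
        elif t[m] > x:
            j = m
        else:
            result = m
        invariant()                 # preserved by the loop body
        # The variant decreases, except on the iteration that sets Result,
        # which makes the exit condition true anyway.
        assert result > 0 or j - i < variant_before
    return result

print(checked_binary_search([None, 1, 3, 5], 3, 5))   # 3
```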

A great reference on variants and the techniques for proving program termination is a *Communications of the ACM* article of 2011: [3].

The variant gives an upper bound on the number of iterations that remain at any time. In sequential search, j – i + 1 would be our best bet; but for binary search it is easy to show that log₂ (j – i + 1) is also a variant, extending the proof of correctness with a proof of performance (the key goal of binary search being to ensure a logarithmic rather than linear execution time).
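The logarithmic bound is easy to observe empirically. The following Python sketch (an illustration of mine, not a proof) counts the iterations of the overflow-safe loop over every possible search value for an array of 1000 elements, and compares the worst case with ⌊log₂ n⌋ + 1:

```python
import math

def binary_search_count(t, n, x):
    # Returns (result, number of iterations) for the overflow-safe loop.
    result, i, j, count = 0, 1, n + 1, 0
    while not (i >= j or result > 0):
        count += 1
        m = i + (j - i) // 2
        if t[m] < x:
            i = m + 1
        elif t[m] > x:
            j = m
        else:
            result = m
    return result, count

n = 1000
t = [None] + [2 * k for k in range(1, n + 1)]   # sorted even numbers, indices 1 .. n
# Try every value from 0 to 2n + 1: all hits and all misses.
worst = max(binary_search_count(t, n, x)[1] for x in range(0, 2 * n + 2))
print(worst, math.floor(math.log2(n)) + 1)      # worst case stays within the log bound
```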

This example is, I hope, enough to highlight the crucial role of loop invariants and loop variants in reasoning about loops. How did we get the invariant? It looks as if I pulled it out of a hat. But in fact, if we go the other way round (as advocated in classic books [1] [2]) and develop the invariant and the loop together, the process unfolds naturally and there is nothing mysterious about the invariant.

Here I cannot resist quoting (thirty years on!) from my own book *Introduction to the Theory of Programming Languages* [4]. It has a chapter on axiomatic semantics (also known as *Hoare logic*, the basis for the ideas used in this discussion), which I just made available: see here [5]. Its exercise 9.12 is the starting point for this series of articles. Here is how the book explains how to design the program and the invariant [6]:

In the general case [of search, binary or not] we aim for a loop body of the form

m := ‘‘Some value in 1 .. n such that i ≤ m < j’’;

**if** t [m] ≤ x **then**

i := m + 1

**else**

j := m

**end**

It is essential to get all the details right (and easy to get some wrong):

- The instruction must always decrease the variant j – i, by increasing i or decreasing j. If the definition of m specified just m ≤ j rather than m < j, the second branch would not meet this goal.
- This does not transpose directly to i: requiring i < m < j would lead to an impossibility when j – i is equal to 1. So we accept i ≤ m but then we must take m + 1, not m, as the new value of i in the first branch.
- The conditional’s guards are tests on t [m], so m must always be in the interval 1 .. n. This follows from the clause 1 ≤ i ≤ j ≤ n + 1, which is part of the invariant.
- If this clause is satisfied, then m ≤ n and m > 0, so the conditional instruction indeed leaves this clause invariant.
- You are invited to check that both branches of the conditional also preserve the rest of the invariant.
- Any policy for choosing m is acceptable if it conforms to the above scheme. Two simple choices are i and j – 1; they lead to variants of the sequential search algorithm [which the book discussed just before binary search]. For binary search, m will be roughly equal to the average of i and j.

“Roughly” because we need an integer, hence the // (integer division).
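The book's observation that any policy for m conforming to the scheme is acceptable can be made executable. Here is a Python sketch of mine (names and the final membership check are my own additions, not from the book; 1-based indexing is emulated with an unused slot 0) in which the midpoint policy is a parameter, so that a sequential-style search and binary search become two instances of the same loop:

```python
def general_search(t, n, x, choose):
    # The general scheme: any policy m = choose(i, j) with i <= m < j works.
    i, j = 1, n + 1
    while i < j:
        m = choose(i, j)
        assert i <= m < j            # the scheme's requirement on m
        if t[m] <= x:
            i = m + 1
        else:
            j = m
    # On exit, elements 1 .. i - 1 are <= x and elements i .. n are > x,
    # so x can only appear at position i - 1 (its last occurrence).
    return i - 1 if i > 1 and t[i - 1] == x else 0

t = [None, 1, 3, 3, 7, 9]
print(general_search(t, 5, 3, lambda i, j: i))                 # sequential-style policy
print(general_search(t, 5, 3, lambda i, j: i + (j - i) // 2))  # binary search policy
```

Both calls return the same index; only the number of iterations, hence the variant's rate of decrease, differs between the two policies.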

In the last section, I will reflect further on the lessons we can draw from this example, and the practical significance of the key concept of invariant.

[1] E.W. Dijkstra:* A Discipline of Programming*, Prentice Hall, 1976.

[2] David Gries: *The Science of Programming*, Springer, 1989.

[3] Byron Cook, Andreas Podelski and Andrey Rybalchenko: *Proving program termination*, in *Communications of the ACM*, vol. 54, no. 5, May 2011, pages 88-98, available here.

[4] Bertrand Meyer, *Introduction to the Theory of Programming Languages*, Prentice Hall, 1990. The book is out of print but can be found used, e.g. on Amazon. See the next entry for an electronic version of two chapters.

[5] Bertrand Meyer: *Axiomatic semantics*, chapter 9 from [4], available here. Note that the PDF was reconstructed from an old text-processing system (troff); the figures could not be recreated and are missing. (One of these days I might have the patience to scan them from a book copy and add them. Unless someone wants to help.) I also put online, with the same caveat, chapter 2 on notations and mathematical basis: see here.

[6] Page 383 of [4] and [5]. The text is verbatim except a slight adaptation of the programming notation and a replacement of the variables: i in the book corresponds to i – 1 here, and j to j – 1. As a matter of fact I prefer the original conventions from the book (purely as a matter of taste, since the two are rigorously equivalent), but I changed here to the conventions of the program as it appears in the AutoProof page, with the obvious advantage that you can verify it mechanically. The text extract is otherwise exactly as in the 1990 book.

What was this journey about?

We started with a succession of attempts that might have “felt right” but were in fact all wrong, each in its own way: giving the wrong answer in some cases, crashing (by trying to access an array outside of its index interval) in some cases, looping forever in some cases. Always “in some cases”, evidencing the limits of testing, which can never guarantee that it exercises all the problem cases. A correct program is one that works in *all *cases. The final version was correct; you were able to prove its correctness with an online tool and then to understand (I hope) what lies behind that proof.

To show how to prove such correctness properties, I have referred throughout the series to publications from the 1990s (my own *Introduction to the Theory of Programming Languages*), the 1980s (Jon Bentley’s *Programming Pearls* columns, Gries’s *Science of Programming*), and even the 1970s (Dijkstra’s *Discipline of Programming*). I noted that the essence of my argument appeared in a different form in one of Bentley’s *Communications* articles. What is the same and what has changed?

The core concepts have been known for a long time and remain applicable: assertion, invariant, variant and a few others, although they are much better understood today thanks to decades of theoretical work to solidify the foundation. Termination also has a more satisfactory theory.

On the practical side, however, the progress has been momentous. Considerable engineering has gone into making sure that the techniques scaled up. At the time of Bentley’s article, binary search was typical of the kind of programs that could be proved correct, and the proof had to proceed manually. Today, we can tackle much bigger programs, and use tools to perform the verification.

Choosing binary search again as an example today has the obvious advantage that everyone can understand all the details, but should not be construed as representative of the state of the art. Today’s proof systems are far more sophisticated. Entire operating systems, for example, have been mechanically (that is to say, through a software tool) proved correct. In the AutoProof case, a major achievement was the proof of correctness [1] of an entire data structure (collections) library, EiffelBase 2. In that case, the challenge was not so much size (about 8,000 source lines of code), but the complexity of both:

- The scope of the verification, involving the full range of mechanisms of a modern object-oriented programming language, with classes, inheritance (single and multiple), polymorphism, dynamic binding, generics, exception handling etc.
- The code itself, using sophisticated data structures and algorithms, involving in particular advanced pointer manipulations.

In both cases, progress has required advances on both the science and engineering sides. For example, the early work on program verification assumed a bare-bones programming language, with assignments, conditionals, loops, routines, and not much more. But real programs use many other constructs, growing ever richer as programming languages develop. To cover exception handling in AutoProof required both theoretical modeling of this construct (which appeared in [2]) and implementation work.

More generally, scaling up verification capabilities from the small examples of 30 years ago to the sophisticated software that can be verified today required the considerable effort of an entire community. AutoProof, for example, sits at the top of a tool stack relying on the Boogie environment from Microsoft Research, itself relying on the Z3 theorem prover. Many person-decades of work make the result possible.

Beyond the tools, the concepts are essential. One of them, loop invariants, has been illustrated in the final version of our program. I noted in the first article the example of a well-known expert and speaker on testing who found no better way to announce that a video would not be boring than “relax, we are not going to talk about loop invariants.” Funny perhaps, but unfair. Loop invariants are one of the most beautiful concepts of computer science. Not so surprisingly, because loop invariants are the application to programming of the concept of mathematical induction. According to the great mathematician Henri Poincaré, all of mathematics rests on induction; maybe he exaggerated, maybe not, but who would think of teaching mathematics without explaining induction? Teaching programming without explaining loop invariants is no better.

Below is an illustration (if you will accept my psychedelic diagram) of what a loop is about, as a problem-solving technique. Sometimes we can get the solution directly. Sometimes we identify several steps to the solution; then we use a sequence (A ; B; C). Sometimes we can find two (or more) different ways of solving the problem in different cases; then we use a conditional (**if **c **then **A **else **B **end**). And sometimes we can only get a solution by getting closer repeatedly, not necessarily knowing in advance how many times we will have to advance towards it; then, we use a loop.

We identify an often large (i.e. very general) area where we know the solution will lie; we call that area the loop invariant. The solution or solutions (there may be more than one) will have to satisfy a certain condition; we call it the exit condition. From wherever we are, we shoot into the invariant region, using an appropriate operation; we call it the initialization. Then we execute as many times as needed (maybe zero if our first shot was lucky) an operation that gets us closer to that goal; we call it the loop body. To guarantee termination, we must have some kind of upper bound of the distance to the goal, decreasing each time discretely; we call it the loop variant.

This explanation is only an illustration, but I hope it makes the ideas intuitive. The key to a loop is its invariant. As the figure suggests, the invariant is always a generalization of the goal. For example, in binary search (and many other search algorithms, such as sequential search), our goal is to find a *position* where either x appears or, if it does not, we can be sure that it appears nowhere. The invariant says that we have an interval guaranteed to be such that x appears in the array if and only if it appears at an index in that interval.

An invariant should be:

- Strong enough that we can devise an exit condition which in the end, combined with the invariant, gives us the goal we seek (a solution).
- Weak enough that we can devise an initialization that ensures it (by shooting into the yellow area) easily.
- Tuned so that we can devise a loop body that, from a state satisfying the invariant, gets us to a new one that is closer to the goal.

In the example:

- The exit condition is simply that the interval’s length is 1. (Technically, that we have computed **Result** as the single interval element.) Then from the invariant and the exit condition, we get the goal we want.
- Initialization is easy, since we can just take the initial interval to be the whole index range of the array, which trivially satisfies the invariant.
- The loop body simply decreases the length of the interval (which can serve as loop variant to ensure termination). How we decrease the length depends on the search strategy: in sequential search, each iteration decreases the length by 1, correct although not fast, and binary search decreases it by about half.

The general scheme always applies. *Every* loop algorithm is characterized by an invariant. The invariant may be called the DNA of the algorithm.

To demonstrate the relevance of this principle, my colleagues Furia, Velder, and I published a survey paper [5] in *ACM Computing Surveys* describing the invariants of important algorithms in many areas of computer science, from search algorithms to sorting (all major algorithms), arithmetic (long integer addition, squaring), optimization and dynamic programming (Knapsack, Levenshtein/Edit distance), computational geometry (rotating calipers), the Web (Page Rank)… I find it pleasurable and rewarding to go deeper into the basis of loop algorithms and understand their invariants; like a geologist who does not stop at admiring the mountain, but gets to understand how it came to be.

Such techniques are inevitable if we want to get our programs right, the topic of this article. Even putting aside the Bloch average-computation overflow issue, I started with 5 program attempts, all of them friendly-looking but wrong in different ways. I could have continued fiddling with the details, following my gut feeling to fix the flaws and running more and more tests. Such an approach can be reasonable in some cases (if you have an algorithm covering a well-known and small set of cases), but will not work for non-trivial algorithms.

Newcomers to the concept of loop invariant sometimes panic: “this is all fine, you gave me the invariants in your examples, how do I find my own invariants for my own loops?” I do not have a magic recipe (nor does anyone else), but there is no reason to be scared. Once you have understood the concept and examined enough examples (just a few of those in [5] should be enough), writing the invariant at the same time as you are devising a loop will come as second nature to you.

As the fumbling attempts in the first few sections should show, there is not much of an alternative. Try this approach. If you are reaching these final lines after reading what preceded them, allow me to thank you for your patience, and to hope that this rather long chain of reflections on verification will have brought you some new insights into the fascinating challenge of writing correct programs.

[1] Nadia Polikarpova, Julian Tschannen, and Carlo A. Furia: *A Fully Verified Container Library*, in Proceedings of 20th International Symposium on Formal Methods (FM 15), 2015. (Best paper award.)

[2] Martin Nordio, Cristiano Calcagno, Peter Müller and Bertrand Meyer: *A Sound and Complete Program Logic for Eiffel*, in *Proceedings of TOOLS 2009 (Technology of Object-Oriented Languages and Systems)*, Zurich, June-July 2009, eds. M. Oriol and B. Meyer, Springer LNBIP 33, June 2009.

[3] Boogie page at MSR, see here for publications and other information.

[4] Z3 was also originally from MSR and has been open-sourced, one can get access to publications and other information from its Wikipedia page.

[5] Carlo Furia, Bertrand Meyer and Sergey Velder: *Loop invariants: Analysis, Classification and Examples*, in *ACM Computing Surveys*, vol. 46, no. 3, February 2014. Available here.

[6] Dynamic programming is a form of recursion removal, turning a recursive algorithm into an iterative one by using techniques known as “memoization” and “bottom-up computation” (Berry). In this transformation, the invariant plays a key role. I will try to write this up some day as it is a truly elegant and illuminating explanation.

The post Getting a program right, in nine episodes appeared first on Bertrand Meyer's technology+ blog.

The post A little Schubertiade to brighten the gloom appeared first on Bertrand Meyer's technology+ blog.

OK, not quite Horowitz and Rubinstein yet… (When the Carnegie Hall recital comes — will there ever be concerts at Carnegie Hall again? — I promise to post the announcement here with an offer for discounted tickets.) But I hope that in these trying times for all of us it brightens your day as it does not cease to brighten mine.

*(Note added 21 March: there is already a complete and better rehearsed version — I have updated the above link to point to it.)*


The post Notations you didn’t even know you could use appeared first on Bertrand Meyer's technology+ blog.

∃ c: s ¦ moisture (c) = soft

This is obviously mathematics. To express such a property in a programming language, you have to write a function containing a loop that iterates through the elements of s. Right?

Wrong. The above construct is valid Eiffel. It is a consequence of recent syntax extensions that retain all the simplicity and consistency of the language but take full advantage of Unicode. Of course you do not have Unicode characters such as `∃` on your keyboard, but EiffelStudio’s completion mechanism inserts them for you.

To see how this works, just read Alexander Kogtenkov’s recent blog post on the topic.
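For comparison, mainstream languages express the same existential quantifier through library predicates rather than notation. Here is an illustrative Python rendering (the cookie-flavored data and names are my own guess at the example's intent, not taken from the Eiffel original):

```python
# Does some element c of s satisfy moisture(c) = soft?
s = [
    {"name": "c1", "moisture": "crunchy"},
    {"name": "c2", "moisture": "soft"},
]

def moisture(c):
    return c["moisture"]

# The Python counterpart of ∃ c: s ¦ moisture (c) = soft:
print(any(moisture(c) == "soft" for c in s))   # True
```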


The post Serious newspapers: now is the moment to unlock Coronavirus material, or incur eternal shame appeared first on Bertrand Meyer's technology+ blog.

- Extoll their grandiose proclamations of supposed devotion to public service.
- Charge for access to coverage of the epidemic that is scaring the world.
- Charge for access to the epidemic that is scaring the world.

In a meeting I recently attended, someone was saying that “*the media has hyped the crisis*”. About the mainstream media, this reproach is incorrect and unfair: articles have generally been measured and informative, explaining the situation and calling on experts.

But such solid content sits behind paywalls! Free sources, particularly on social networks, are where you find the hype, the crazy theories and the lies.

Rightly or wrongly, many people around the world are panicking. They need a serious source of information and they are not all able to pay for it, especially if it comes from many sources to which one must independently subscribe.

Newspaper owners, this is your moment of truth, or of eternal shame. Free Covid-19 content *now* and *without restrictions* until this crisis ends.

We are fed up with your self-professions of sanctity and want you to fulfill your elementary social duty. You should have started doing this weeks ago.

It’s not even bad for business — it will attract new, grateful, supportive subscribers who will stay with you for a long time.

The simple, obvious, honest thing to do.

I, for one, pledge never to give one cent, peso or kopeck in the future to any publication that continues its current selfish and abhorrent policy of charging for life-and-death information that the world craves and needs.


The post Time to live up to the boasting appeared first on Bertrand Meyer's technology+ blog.

The epidemic expert Jonathan Quick, interviewed by the *Guardian*, had this remarkable observation, talking about Covid-19: **news tends to be behind paywalls, while fake news is free**.

Granted, every company (except maybe the *Washington Post*, since I have a feeling I am ordering enough from Amazon already) is entitled to earn money. But not all companies claim that their business model is about saving the world. My dear self-praising press, if you are really as generously public-minded as you claim, here is a good way to demonstrate it. People around the world are genuinely worried about the Coronavirus epidemic and eager for serious information, if only to counter rumors and conspiracy theories. They eagerly seek credible, validated information that has gone through professional vetting, but many of them cannot afford to subscribe to all the relevant sources.

A few days before and after major elections, outlets such as the NYT and Wapo generally make their political articles free-access. The current health scare is an even more serious occasion.

This is the time for all serious news media around the world to show that their grand declarations of philanthropy are not just words.

We, the readers, should vociferously demand that as a public service these press organs immediately make all Covid-19 news, reports and analyses free-access.


The post In the scary land of irrational discourse appeared first on Bertrand Meyer's technology+ blog.

She commented “*I am totally bummed to announce that we have retracted last year’s paper on enzymatic synthesis of beta-lactams*” and “*it is painful to admit, but important to do so*” and “*the work has not been reproducible*” and “*I apologize to all*” and “*I was a bit busy when this was submitted, and did not do my job well*”.

Not very unusual news; this kind of thing happens all the time as part of the normal process of research and publication. (This just in! Scientists are human! They make mistakes once in a while! Full story at 11!)

Perhaps this one is slightly more worthy of notice because the lead author is a Nobel prize winner. Time for some rejoicing (Schadenfreude as it is called in good English) for anyone who is not a Nobel prize winner: haha, you think you are so smart but you mess up too. It never hurts to have an occasional reminder that we should not deify anyone. But hardly prime-time news.

Well, it is prime-time news for Fox News, which devotes a whole article to the matter. OK, I know, Fox News. And yes, it does pain me to include a hyperlink to a foxnews.com page in this otherwise perfectly decent, civilized, family-safe blog. But in fact that particular article is not by itself outrageous. Suspicious, yes: why such a sudden focus on a minor scientific episode in a news source not particularly famous (I hope you admire my gift for euphemism) for its extensive coverage of the frontlines of scientific research? But whatever its ultimate agenda the article itself is factual, not judgmental.

What is striking is the avalanche of reader comments on that page. If you go and take a look at them, be prepared; put on your parka. Reading these comments will be, for many of us, a peek into a completely different world. A world that we vaguely know exists, but do not actually visit.

It is not a nice world to venture into: full of bile, frustration, resentment, jealousy, conspiracy theories, slander, attacks on anyone trying to take a rational approach to issues, with hardly a pleasant or optimistic note ever. It is not a world one wants to visit often, but reading such a page is an eye-opener for anyone who accepts the premises of rational thinking and might believe that they are universally accepted.

“Striking”, I wrote. *Scary* is a more apposite word. With the kind of nonsense-spouting and science-bashing that appears in countless messages in the comments section of the page, one can fear the worst regarding questions that face our society, for which rational, science-based advice is critical. (Yes, coronavirus, I am looking at you!)

Very few of the comments on the page say the obvious: it is not good to make errors, but errors will occur, and the scientist should be commended for checking further and coming out with the admission that her study had flaws. As far as we know the initiative came from her, spontaneously. It is one of the signs of the healthiness of science that we always question results. We question those of other people (there are plenty of sites, such as pubpeer and forbetterscience, entirely devoted to tracking and debunking flawed research). We also question our own: partly to avoid the humiliation of having someone else report one of our mistakes before we do; but also because of the good scientist’s natural search for intellectual honesty.

Most of the article commenters do not mention this key lesson of the incident: the Nobel prize winner’s integrity. For them, the article retraction demonstrates that… the entire edifice of science is flawed! For example:

*She’s a liberal… I thought her being wrong was understood.*

*Now we need to find an honest Climate Change researcher to admit that their computer models are faulty and much of their “data” is fake.*

*Integrity! Now if the “scientists” who have fabricated Global Warming/ Climate Change, whatever, “research” would come forward with admissions about their flawed, fallacious “research” we would be golden.*

*Now if we could get the climate change “scientists” to do the same maybe some credibility could be restored to the field.*

and so on ad nauseam. (Not a figure of style — reading these comments is truly nauseating.) In reality the retraction demonstrates, or rather illustrates (one example is not a demonstration), the reverse of these assertions: that the scientific process includes its own correction mechanisms. To use a computer scientist’s terminology, it is not fault-free (no scientist ever claimed anything like that) but fault-tolerant.

Of course the reason the Fox News crowd is suddenly so interested in science is not (one imagines) science per se but the science of climate change. Comment after comment uses the article, as illustrated by the above examples, to dismiss the scientific consensus on the reports of the United Nations’ Intergovernmental Panel on Climate Change. In other words: the retraction of one three-author paper on beta-lactams proves that the work of hundreds of scientists producing thousands of articles on climatology over several decades is flawed? The logic of such a deduction is… shaky.

The modern world is based, through technology, on science. To post on the Web their absurd rejections of scientifically established facts, the Fox News readers couldn’t do without relying on mobile phones, mobile networks, software systems, computers and other extraordinary achievements of human intelligence, the result of centuries of patient cumulative application of the same scientific principles and techniques that these posts ridicule. They are stuck in a pre-scientific mindset, dominated by the kind of magical thinking that the founders of modern thought had to overcome between the 16th and 18th century, as brilliantly analyzed by Gaston Bachelard’s Formation of the Scientific Mind.

Somehow they skipped what the rest of us learn in grade school (that two plus two equals four, cause precedes effect and so on). They are many, they vote, they think they are right and the rest of the world is wrong, hold these beliefs very strongly (Dunning-Kruger effect), and put the world at risk.


The post Getting your priorities right appeared first on Bertrand Meyer's technology+ blog.

The message reads:

Total wants to save on costs. Soap is money.

Fine. But on the matter of hand-washing one might (perhaps) think, in the current circumstances, of more urgent advice?


The post Call for suggestions: beauty appeared first on Bertrand Meyer's technology+ blog.

I obviously have my own examples but am interested in more. If you have some that you feel should be considered for inclusion, perhaps because you experienced a “Wow!” effect when you encountered them, please tell me. I am only asking for names or general pointers, not an in-depth analysis (that’s my job). To avoid having my thunder stolen I would prefer that you alert me by email. I will give credit for examples not previously considered.

Thanks!

*Abstract of the talk as published:*

*Scientists often cite the search for beauty as one of their primary guiding forces. Programming and software engineering offer an inexhaustible source of astoundingly beautiful ideas, from strikingly elegant algorithms and data structures to powerful principles of methodology and language design.*

*Defining beauty is elusive, but true beauty imposes itself in such a way as to remove any doubt. Drawing comparisons from art, literature and other endeavours, he will show a sample of ideas from all walks of software, directly understandable to a wide audience of non-software-experts, offering practical applications in technology that we use daily, and awe-inspiring in their simplicity and elegance.*


The post An annoying practice from another age appeared first on Bertrand Meyer's technology+ blog.

If you try to copy-paste such a pseudo-address into an email client so as to fix it there, you often have to spend some time fighting the email client’s knowledge of what an email address looks like. It can result in errors and bounced mail. Not the world’s worst scandal but an annoying waste of time.

An address written out in that form is a way for the page owner to announce to the cognoscenti: “*I am a computer scientist and hence very knowledgeable about the ways of the Internet; I know that spammers run bots to harvest addresses. See how I defeat them.*”

So 1995!

Both spam and defenses against it are no longer what they were back then. Anyone who manages to use email effectively is protected, even without knowing it, by spam blockers, which have become quite good. (According to a specialized site, 14.5 *billion* spam emails are sent every day, so without these protections we would all be drowning in spam, which for most people is not the case.)

As to any spam harvesters who are really interested in computer science researchers, they are most likely able anyway to write a little regular expression analyzer that captures the vast majority of the supposedly obfuscated addresses.

If you really want strangers to be able to email you without making your address visible to the spammers, and are a CS person, just include in your Web page a few lines of Javascript that, without revealing the email address in the HTML code, will display something like “Here is my email address”, in such a way that a visitor who clicks on Here gets a new-email window with your email address pre-filled. Not very hard — I use this trick on my own home page and am certainly not a Javascript expert.
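A minimal sketch of the kind of script this trick relies on (the address, the element id and the page wiring below are all hypothetical placeholders, not the code from my page):

```javascript
// Assemble the address from pieces so it never appears verbatim in the HTML source.
// "someone" and "example.org" are placeholders for a real user and domain.
const user = "someone";
const domain = "example.org";
const address = user + "@" + domain;

// On a real page, wire a "Here is my email address" link to a pre-filled message:
//   document.getElementById("email-link").href = "mailto:" + address;
//   document.getElementById("email-link").textContent = "Here";

console.log(address); // someone@example.org
```

A simple harvester scanning the HTML finds no address; a visitor clicking the link gets a new-email window with the address pre-filled.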

But I suspect that as long as you are prepared to let people email you, even just letting your email address appear in clear is not going to result in catastrophe. Your organization’s or ISP’s spam filter is protecting you.

Come on. This is 2020. Windows 95 and the OJ Simpson trial are no longer the news of the day. Time to stop worrying about what no longer matters, and stop bothering people who are just trying to reach you.

Down with corny address obfuscation!


The post LASER 2020 in Elba Island: DevOps, Microservices and more, first week of June appeared first on Bertrand Meyer's technology+ blog.

- Fabio Casati, ServiceNow and University of Trento, and Kannan Govindarajan from ServiceNow, on **Taking AI from research to production – at scale**.
- Adrian Cockcroft, Amazon Web Services, on **Building and Operating Modern Applications**.
- Elisabetta Di Nitto, Politecnico di Milano.
- Valérie Issarny, INRIA, on **The Web for the age of the IoT**.
- Erik Meijer, Facebook, on **Software Development At Scale**.
- Me, on **Software from beginning to end: a comprehensive method**.

As always, the setup is the incomparable environment of the Hotel del Golfo in Procchio, Elba Island off the coast of Tuscany, ideal at that time of year (normally good weather, warm but not hot, few tourists). The school is intensive but there is time to enjoy the beach, the hotel’s amenities and the wonderful environment of Elba (wake up your inner Napoleon). The school has a fairly small size and everyone lives under the same (beautiful) roof, so there is plenty of time for interaction with the speakers and other participants.

About these participants: the school is intended for engineers and managers in industry as well as researchers and PhD students. In fact it’s a mix that one doesn’t find that often, allowing for much cross-learning.

Another way to put it is that this is now the 16th edition of the school (it started in 2004 but we skipped one year), so it cannot be doing everything wrong.


The post Two talks by Gilles Brassard in Zurich and Schaffhausen, this Wednesday appeared first on Bertrand Meyer's technology+ blog.

- One at the University of Zurich, at 11:15 (session start at 10:30), on “The Art of Secret Communication in a Quantum World”.
- The other at the Schaffhausen Institute of Technology in Schaffhausen, at 18:30 (session start at 17:30, talks followed by an apéro), with the title “What if Einstein was right after all? Once again…”.

In other words, morning talk more technical (quantum cryptography), evening talk more general.

Abstracts and other details at https://sit.org/insights, also registration (not required but recommended).

— Bertrand Meyer


The post Talk on requirements at UC Santa Barbara tomorrow appeared first on Bertrand Meyer's technology+ blog.

The abstract and rest of the information are here.

I will spend the last few minutes of the talk discussing other current developments (verification, concurrency).


The post This Wednesday in Nice: survey talk on the Eiffel method appeared first on Bertrand Meyer's technology+ blog.


With society’s growing reliance on IT systems, the ability to write high-quality software is ever more critical. While a posteriori verification techniques have their role, there is no substitute for methods and tools that provide built-in quality (“correctness by construction”) and scale up to very large systems. For several decades my colleagues and I have been building such a method, based in particular on the concept of Design by Contract, the associated tools and the supporting language, Eiffel. The scope is wide, encompassing all aspects of the software development process, from requirements and design to implementation and verification. I will present an overview of the approach, show what it can yield, and discuss remaining open issues.

This talk is meant for everyone, whether from industry or academia, with an interest in practical techniques for engineering high-quality software.

No registration is required. The presentation will be in English.

*Gilles Kahn, a brilliant computer scientist who died too young, was for a while director of INRIA.


The post Defining and classifying requirements (new publication) appeared first on Bertrand Meyer's technology+ blog.

I have written on requirements in this blog, including very recently, and will continue in forthcoming installments. For today I want to point to a recent article [1], presented at the newly revived TOOLS conference in October. It attempts to bring some order and rigor to the basic definitions in the field.

From the abstract:

Requirements engineering is crucial to software development but lacks a precise definition of its fundamental concepts. Even the basic definitions in the literature and in industry standards are often vague and verbose.

To remedy this situation and provide a solid basis for discussions of requirements, this work provides precise definitions of the fundamental requirements concepts and two systematic classifications: a taxonomy of requirement elements (such as components, goals, constraints…); and a taxonomy of possible relations between these elements (such as “extends”, “excepts”, “belongs”…).

The discussion evaluates the taxonomies on published requirements documents; readers can test the concepts in two online quizzes.

The intended result of this work is to spur new advances in the study and practice of software requirements by clarifying the fundamental concepts.

This version is a first step; we are aware of its limitations and are already revising the definitions and taxonomy. The project is aimed at providing a solid foundation for a delicate area of software engineering and it will take some time to get it completely right. Still, I think the paper as it is already introduces important concepts. I will within the next two weeks write a more detailed blog article summarizing some of them.

[1] Bertrand Meyer, Jean-Michel Bruel, Sophie Ebersold, Florian Galinier, Alexandr Naumchev: *The Anatomy of Requirements*, in TOOLS 51, Software Technology: Methods and Tools, Innopolis, Russia, October 15–17, 2019, pages 10-40, available here (Springer site, paywall) and here (arXiv draft).


The post What happened to the kilogram? Schaffhausen, 16 December appeared first on Bertrand Meyer's technology+ blog.

The culmination of the day is a talk by Prof. Ketterle in the evening on “*What happened to the kilogram*?”. From the abstract:

For 130 years, a cylinder made of a platinum-iridium alloy stored in Saint-Cloud near Paris was the official definition of a kilogram, the basic unit of mass. This all changed on May 20 of this year: a kilo is now defined by a fundamental constant of nature known as the Planck constant, which relates the energy of a photon to its frequency: 6.62607015 × 10⁻³⁴ kilograms times square meters per second. Try that the next time you buy a kilo of asparagus.

Sounds complicated? For MIT’s Wolfgang Ketterle, a Nobel Prize winner, “Conceptually, the definition is very simple”.

Simple? Really? Come to Schaffhausen and hear for yourself whether Prof. Ketterle can make the new kilogram crystal-clear to common mortals.

Earlier in the day, he will give a talk in German on new forms of materials that appear at temperatures near absolute zero, complete with demonstrations.

More generally, there is a full set of talks throughout the day about various aspects of advanced physics and computer science, and even a “quantum magician”, plus music and food.

Schaffhausen is about 40 minutes from Zurich (or Zurich airport) by train or car.

Attendance is free but registration is recommended. One can register for the full day or for some events only. See further information and registration form here.


The post Are my requirements complete? appeared first on Bertrand Meyer's technology+ blog.

Practitioners are indeed, as in most of my articles, the intended audience. I emphasize this point right at the start because if you glance at the rest of the text you will see that it contains (horror of horrors) some mathematical formulae, and might think “this is not for me”. It is! The mathematics is very simple and my aim is practical: to shed light on an eternal question that faces anyone writing requirements (whatever the style, traditional or agile): how can I be sure that a requirements specification is complete?

To a certain extent you cannot. But there is a better answer, a remarkably simple one which, while partial, helps.

The better answer is called “sufficient completeness” and comes from the theory of abstract data types. It was introduced in a 1978 article by Guttag and Horning [1]. It is also implicit in a more down-to-earth document, the 1998 IEEE standard on how to write requirements [2].

There is nothing really new in the present article; in fact my book *Object-Oriented Software Construction* [3] contains an extensive discussion of sufficient completeness (meant to be more broadly accessible than Guttag and Horning’s scholarly article). But few people know the concepts; in particular very few practitioners have heard of sufficient completeness (if they have heard at all of abstract data types). So I hope the present introduction will be useful.

The reason the question of determining completeness of requirements seems hopeless at first is the natural reaction: complete with respect to *what*? To know that the specification is complete we would need a more general description of all that our stakeholders want and all the environment constraints, but this would only push the problem further: how do we know that such description itself is complete?

That objection is correct in principle: we can never be sure that we did not forget something someone wanted, or some property that the environment imposes. But there also exist more concrete and assessable notions of completeness.

The IEEE standard gives three criteria of completeness. The first states that “all requirements” have been included, and is useless, since it runs into the logical paradox mentioned above, and is tautological anyway (the requirements are complete if they include all requirements, thank you for the information!). The second is meaningful but of limited interest (a “bureaucratic” notion of completeness): every element in the requirements document is numbered, every cross-reference is defined and so on. The last criterion is the interesting one: *“Definition of the responses of the software to all realizable classes of input data in all realizable classes of situations*”. Now this is meaningful. To understand this clause we need to step back to sufficient completeness and, even before that, to abstract data types.

Abstract data types will provide our little mathematical excursion (our formal picnic in the words of an earlier article) in our study of requirements and completeness. If you are not familiar with this simple mathematical theory, which every software practitioner should know, I hope you will benefit from the introduction and example. They will enable us to introduce the notion of sufficient completeness formally before we come back to its application to requirements engineering.

Abstract data types are the mathematical basis for object-oriented programming. In fact, OO programming but also OO analysis and OO design are just a realization of this mathematical concept at various levels of abstraction, even if few OO practitioners are aware of it. (Renewed reference to [3] here if you want to know more.)

An ADT (abstract data type) is a set of objects characterized not by their internal properties (what they are) but by the operations applicable to them (what they have), and the properties of these operations. If you are familiar with OO programming you will recognize that this is exactly, at the implementation level, what a *class *is. But here we are talking about mathematical objects and we do not need to consider implementation.

An example of a type defined in this way, as an ADT, is a notion of POINT on a line. We do not say how this object is represented (a concept that is irrelevant at the specification level) but how it appears to the rest of the world: we can create a new point at the origin, ask for the coordinate of a point, or move the point by a certain displacement. The example is the simplest meaningful one possible, but it gives the ideas.

An ADT specification has three parts: Functions, Preconditions and Axioms. Let us see them (skipping Preconditions for the moment) for the definition of the POINT abstract data type.

The functions are the operations that characterize the type. There are three kinds of function, defined by where the ADT under definition, here POINT, appears:

- Creators, where the type appears only among the results.
- Queries, where it appears only among the arguments.
- Commands, where it appears on both sides.

There is only one creator here:

*new*: → POINT

*new* is a function that takes no argument, and yields a point (the origin). We will write the result as just *new* (rather than using empty parentheses as in *new* ()).

Creators correspond in OO programming to constructors of a class (creation procedures in Eiffel). Like constructors, creators may have arguments: for example instead of always creating a point at the origin we could decide that *new* creates a point with a given coordinate, specifying it as INTEGER → POINT and using it as *new* (i) for some integer i (our points will have integer coordinates). Here for simplicity we choose a creator without arguments. In any case the new type, here POINT, appears only on the side of the results.

Every useful ADT specification needs at least one creator, without which we would never obtain any objects of the type (here any points) to work with.

There is also only one query:

*x*: POINT → INTEGER

which gives us the position of a point, written *x* (p) for a point p. More generally, a query enables us to obtain properties of objects of the new type. These properties must be expressed in terms of types that we have already defined, like INTEGER here. Again there has to be at least one query, otherwise we could never obtain usable information (information expressed in terms of what we already know) about objects of the new type. In OO programming, queries correspond to fields (attributes) of a class and functions without side effects.

And we also have just one command:

*move*: POINT × INTEGER → POINT

a function that, for any point p and integer i, yields a new point, *move* (p, i). Again an ADT specification is not interesting unless it has at least one command, representing ways to modify objects. (In mathematics we do not actually modify objects, we get new objects. In imperative programming we will actually update existing objects.) In the classes of object-oriented programming, commands correspond to procedures (methods which may change objects).

You see the idea: define the notion of POINT through the applicable operations.

Listing their names and the types of their arguments and results (as in POINT × INTEGER → POINT) is not quite enough to specify these operations: we must specify their fundamental properties, without of course resorting to a programming implementation. That is the role of the second component of an ADT specification, the axioms.

For example I wrote above that *new* yields the origin, the point for which *x* = 0, but you only had my word for it. My word is good but not good enough. An axiom will give you this property unambiguously:

*x* (*new*) = 0 — A0

The second axiom, which is also the last, tells us what *move* actually does. It applies to any point p and any integer m:

*x* (*move* (p, m)) = *x* (p) + m — A1

In words: the coordinate of the point resulting from moving p by m is the coordinate of p plus m.

That’s it! (Except for the notion of precondition, which will wait a bit.) The example is trivial but this approach can be applied to any number of data types, with any number of applicable operations and any level of complexity. That is what we do, at the design and implementation level, when writing classes in OO programming.
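As an illustration, here is a minimal sketch (in JavaScript rather than Eiffel, and not a definitive rendering) of a class realizing the POINT specification, with the creator, the query and the command mapped to a constructor, a getter and a method:

```javascript
// Sketch of a class realizing the POINT ADT; names chosen to mirror the text.
class Point {
  constructor() { this._x = 0; }   // creator "new": a point at the origin (axiom A0)
  get x() { return this._x; }      // query "x": the coordinate, an INTEGER
  move(m) { this._x += m; }        // command "move": axiom A1, x(move(p, m)) = x(p) + m
}

const p = new Point();
p.move(3);
console.log(p.x); // 3
```

The internal field `_x` is one possible representation; the specification above deliberately says nothing about it.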

Sufficient completeness is a property that we can assess on such specifications. An ADT specification for a type T (here POINT) is sufficiently complete if the axioms are powerful enough to yield the value of any well-formed query expression in a form not involving T. This definition contains a few new terms but the concepts are very simple; I will explain what it means through an example.

With an ADT specification we can form all kinds of expressions, representing arbitrarily complex specifications. For example:

*x* (*move* (*move* (*move* (*new*, 3), *x* (*move* (*move* (*new*, -2), 4))), -6))

This expression will yield an integer (since function *x* has INTEGER as its result type) describing the result of a computation with points. We can visualize this computation graphically; note that it involves creating two points (since there are two occurrences of *new*) and moving them, using in one case the current coordinate of one of them as displacement for the other. The following figure illustrates the process.

The result, obtained informally by drawing this picture, is the x of P5, that is to say -1. We will derive it mathematically below.

Alternatively, if like most programmers (and many other people) you find it more intuitive to reason operationally than mathematically, you may think of the previous expression as describing the result of the following OO program (with variables of type POINT):

**create** p — In C++/Java syntax: p = new POINT();

**create** q

p.move (3)

q.move (-2)

q.move (4)

p.move (q.x)

p.move (-6)

**Result** := p.x

You can run this program in your favorite OO programming language, using a class POINT with *new*, *x* and *move*, and print the value of **Result**, which will be -1.
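For instance, the program above can be transcribed into runnable JavaScript (assuming a minimal Point class with the three operations; this is just one possible realization):

```javascript
// Minimal stand-in for a class POINT with new, x and move.
class Point {
  constructor() { this._x = 0; }   // create: a point at the origin
  get x() { return this._x; }      // query x
  move(m) { this._x += m; }        // command move
}

const p = new Point();
const q = new Point();
p.move(3);
q.move(-2);
q.move(4);
p.move(q.x);   // use q's current coordinate as p's displacement
p.move(-6);
console.log(p.x); // -1
```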

Here, however, we will stay at the mathematical level and simplify the expression using the axioms of the ADT, the same way we would compute any other mathematical formula, applying the rules without needing to rely on intuition or operational reasoning. Here is the expression again (let’s call it i, of type INTEGER):

i = *x* (*move* (*move* (*move* (*new*, 3), *x* (*move* (*move* (*new*, -2), 4))), -6))

A *query expression* is one in which the outermost function being applied, here *x*, is a query function. Remember that a query function is one in which the new type, here POINT, appears only on the left. This is the case with *x*, so the above expression i is indeed a query expression.

For sufficient completeness, query expressions are the ones of interest because their value is expressed in terms of things we already know, like INTEGERs, so they are the only way we can concretely obtain directly usable information about the ADT (to de-abstract it, so to speak).

But we can only get such a value by applying the axioms. So the axioms are “sufficiently complete” if they always give us the answer: the value of any such query expression.

Let us see if the above expression i satisfies this condition of sufficient completeness. To make it more tractable let us write it in terms of simpler expressions (all of type POINT), as illustrated by the figure below:

p1 = *move* (*new*, 3)

p2 = *move* (*new*, -2)

p3 = *move* (p2, 4)

p4 = *move* (p1, *x* (p3))

p5 = *move* (p4, -6)

i = *x* (p5)

(You may note that the intermediate expressions roughly correspond to the steps in the above interpretation of the computation as a program. They also appear in the illustrative figure repeated below.)

Now we start applying the axioms to evaluate the expressions. Remember that we have two axioms: A0 tells us that *x* (*new*) = 0 and A1 that *x* (*move* (p, m)) = *x* (p) + m. Applying A1 to the definition of the expression i yields

i = *x* (p4) – 6

= i4 – 6

if we define

i4 = *x* (p4) — Of type INTEGER

We just have to compute i4. Applying A1 to the definition of p4 tells us that

i4 = *x* (p1) + *x* (p3)

To compute the two terms:

- Applying A1 again, we see that the first term *x* (p1) is *x* (*new*) + 3, but then A0 tells us that *x* (*new*) is zero, so *x* (p1) is 3.
- As to *x* (p3), it is, once more from A1, *x* (p2) + 4, and *x* (p2) is (from A1 then A0) just -2, so *x* (p3) is 2.

In the end, then, i4 is 5, and the value of the entire expression i = i4 – 6 is -1. Good job!
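The same derivation can be carried out mechanically. The following Python sketch is my own illustration, not part of the specification: nested tuples stand in for ADT expressions, and the query *x* is evaluated purely by applying A0 and A1 as rewrite rules, with no operational notion of a point object.

```python
# POINT expressions as nested tuples; x evaluated by the axioms alone.

NEW = ("new",)

def move(p, m):
    # Builds the expression move(p, m) without evaluating anything.
    return ("move", p, m)

def x(p):
    if p == NEW:                 # axiom A0: x(new) = 0
        return 0
    _, p_inner, m = p            # p has the form move(p_inner, m)
    return x(p_inner) + m        # axiom A1: x(move(p, m)) = x(p) + m

p1 = move(NEW, 3)
p3 = move(move(NEW, -2), 4)
p5 = move(move(p1, x(p3)), -6)
i = x(p5)
print(i)                         # -1
```

Note that the recursion terminates exactly because each application of A1 strips one *move*, mirroring the hand derivation above.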

The successful computation of i was just a derivation for one example, showing that in that particular case the axioms yield the answer in terms of an INTEGER. How do we go from one example to an entire specification?

The bad news first: like all interesting problems in programming, sufficient completeness of an ADT specification is theoretically undecidable. There is no general automatic procedure that will process an ADT specification and print out “sufficiently complete” or “not sufficiently complete”.

Now that you have recovered from the shock, you can share the computer scientist’s natural reaction to such an announcement: so what. (In fact we might define the very notion of computer scientist as someone who, even before he brushes his teeth in the morning — if he brushes them at all — has already built the outline of a practical solution to an undecidable problem.) It is enough that we can find a way to determine if a *given *specification is sufficiently complete. Such a proof is, in fact, the computer scientist’s version of dental hygiene: no ADT is ready for prime time unless it is sufficiently complete.

The proof is usually not too hard and will follow the general style illustrated for our simple example.

We note that the definition of sufficient completeness said: “the axioms are powerful enough to yield the value of any *well-formed* query expression in a form not involving the type”. I have not defined “well-formed” yet. It simply means that the expressions are properly structured, with the proper syntax (basically the correct matching of parentheses) and proper number and types of arguments. For example the following are not well-formed (if p is an expression of type POINT):

*move* (p, 55( — Bad use of parentheses.

*move* (p) — Wrong number of arguments.

*move* (p, p) — Wrong type: second argument should be an integer.

Such expressions are nonsense, so we only care about well-formed expressions. Note that in addition to *new*, *x* and *move*, an expression can use integer constants as in the example (although we could generalize to arbitrary integer expressions). We consider an integer constant as a query expression.
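A well-formedness check of this kind is a simple structural recursion. The sketch below is my own illustration (the function name `type_of` and the tuple encoding are assumptions, not part of the specification); it rejects the wrong-arity and wrong-type cases above. The unbalanced-parenthesis case cannot even arise here, since a tuple representation is balanced by construction.

```python
# Classify an expression tree as a well-formed POINT or INTEGER
# expression of the ADT, or reject it (return None).

def type_of(e):
    if isinstance(e, int):            # integer constants are query expressions
        return "INTEGER"
    if e == ("new",):                 # new: -> POINT
        return "POINT"
    if isinstance(e, tuple) and len(e) == 3 and e[0] == "move":
        # move: POINT x INTEGER -> POINT
        if type_of(e[1]) == "POINT" and type_of(e[2]) == "INTEGER":
            return "POINT"
        return None                   # wrong argument types
    if isinstance(e, tuple) and len(e) == 2 and e[0] == "x":
        # x: POINT -> INTEGER
        return "INTEGER" if type_of(e[1]) == "POINT" else None
    return None                       # wrong arity or unknown function

print(type_of(("move", ("new",), 5)))        # POINT
print(type_of(("move", ("new",))))           # None: wrong number of arguments
print(type_of(("move", ("new",), ("new",)))) # None: wrong type of second argument
```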

We have to prove that with the two axioms A0 and A1 we can determine the value of any query expression i. Note that since the only query function is *x*, the only possible form for i, other than an integer constant, is *x* (p) for some expression p of type POINT.

The proof proceeds by induction on the number *n* of parenthesis pairs in a query expression i.

There are two base steps:

- *n* = 0: in that case i can only be an integer constant. (The only expression with no parentheses built out of the ADT’s functions is *new*, and it is not a query expression.) So the value is known. In all other cases i will be of the form *x* (p) as noted.
- *n* = 1: in that case p can only be *new*, in other words i = *x* (*new*), since the only function that yields points, other than *new*, is *move*, and any use of it would add parentheses. In this case axiom A0 gives us the value of i: zero.

For the induction step, we consider i with *n* + 1 parenthesis pairs for *n* ≥ 1. As noted, i is of the form *x* (p), so p has exactly *n* parenthesis pairs. p cannot be *new* (which would give 0 parenthesis pairs and was taken care of in the second base step), so p has to be of the form

p = *move* (p’, i’) — For expressions p’ of type POINT and i’ of type INTEGER.

implying (since i = *x* (p)) that by axiom A1, the value of i is

*x* (p’) + i’

So we will be able to determine the value of i if we can determine the value of both *x* (p’) and i’. Since p has *n* parenthesis pairs and p = *move* (p’, i’), both p’ and i’ have at most *n* – 1 parenthesis pairs. (This use of *n* – 1 is legitimate because we have two base steps, enabling us to assume *n* ≥ 1.) As a consequence, both *x* (p’) and i’ have at most *n* parenthesis pairs, enabling us to deduce their values, and hence the value of i, by the induction hypothesis.

Most proofs of sufficient completeness in my experience follow this style: induction on the number of parenthesis pairs (or the maximum nesting level).

I left until now the third component of a general ADT specification: preconditions. The need for preconditions arises because most practical specifications need some of their functions to be *partial*. A partial function from X to Y is a function that may not yield a value for some elements of X. For example, the inverse function on real numbers, which yields 1 / a for a, is partial since it is not defined for a = 0 (or, on a computer, for non-zero but very small a).

Assume that in our examples we only want to accept points that lie in the interval [-4, +4].

We can simply model this property by turning *move* into a partial function. It was specified above as

*move*: POINT × INTEGER → POINT

The ordinary arrow → introduces a total (always defined) function. For a partial function we will use a crossed arrow ⇸, specifying the function as

*move*: POINT × INTEGER ⇸ POINT

Other functions remain unchanged. Partial functions cause trouble: for f in X ⇸ Y we can no longer cheerfully use f (x), even for x of the appropriate type X. We have to make sure that x belongs to the *domain* of f, meaning the set of values for which f is defined. There is no way around it: if you want your specification to be meaningful and it uses partial functions, you must specify explicitly the domain of each of them. Here is how to do it, in the case of *move*:

*move* (p: POINT; d: INTEGER) **require** |*x* (p) + d| < 5 — where |…| is absolute value
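In an operational rendering, such a precondition becomes a run-time check on the method. A hedged Python sketch (the class below and the choice of ValueError are my own illustration, not prescribed by the specification):

```python
# POINT with move turned into a partial function: the precondition
# |x(p) + d| < 5 keeps every point inside [-4, +4].

class POINT:
    def __init__(self):
        self.x = 0

    def move(self, d):
        # require |x(p) + d| < 5
        if abs(self.x + d) >= 5:
            raise ValueError("precondition violated: point would leave [-4, +4]")
        self.x += d

p = POINT()
p.move(4)          # fine: x becomes 4, still within the interval
try:
    p.move(1)      # would give x = 5, outside [-4, +4]
except ValueError as e:
    print(e)
```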

To adapt the definition (and proofs) of sufficient completeness to the possible presence of partial functions:

- We only need to consider (for the rule that axioms must yield the value of query expressions) well-formed expressions that satisfy the associated preconditions.
- The definition must, however, include the property that axioms always enable us to determine whether an expression satisfies the associated preconditions (normally a straightforward part of the proof since preconditions are themselves query expressions).

Updating the preceding proof accordingly is not hard.

The definition of sufficient completeness is of great help to assess the completeness of a requirements document. We must first regretfully note that for many teams today requirements stop at “use cases” (scenarios) or “user stories”. Of course these are not requirements; they only describe individual cases and are to requirements what tests are to programs. They can serve to check requirements, but do not suffice as requirements. I am assuming real requirements, which include descriptions of behavior (along with other elements such as environment properties and project properties). To describe behaviors, you will define operations and their effects. Now we know what the old IEEE standard [2] is telling us by stating that complete requirements should include

definition of the responses of the software to all realizable classes of input data in all realizable classes of situations

Whether or not we have taken the trouble to specify the ADTs, they are there in the background; our system’s operations reflect the commands, and the effects we can observe reflect the queries. To make our specification complete, we should draw as much as possible of the (mental or explicit) matrix of possible effects of all commands on all queries. “As much as possible” because software engineering is engineering and we will seldom be able to reach perfection. But the degree of fullness of the matrix tells us a lot (possible software metric here?) about how close our requirements are to completeness.

I should note that there are other aspects to completeness of requirements. For example the work of Michael Jackson, Pamela Zave and Axel van Lamsweerde (more in some later article, with full references) distinguishes between business goals, environment constraints and system properties, leading to a notion of completeness as how much the system properties meet the goals and obey the constraints [4]. Sufficient completeness operates at the system level and, together with its theoretical basis, is one of those seminal concepts that every practicing software engineer or project manager should master.

[1] John V. Guttag, Jim J. Horning: *The Algebraic Specification of Abstract Data Types*, in *Acta Informatica*, vol. 10, no. 1, pages 27-52, 1978, available here from the Springer site. This is a classic paper but I note that few people know it today; in Google Scholar I see over 700 citations but fewer than 100 of them in the past 8 years.

[2] IEEE: *Recommended Practice for Software Requirements Specifications*, IEEE Standard 830-1998, 1998. This standard is supposed to be obsolete and replaced by newer ones, more detailed and verbose, but it remains the better reference: plain, modest and widely applied by the industry. It does need an update, but a good one.

[3] Bertrand Meyer, *Object-Oriented Software Construction*, 2nd edition, Prentice Hall, 1997. The discussion of sufficient completeness was in fact already there in the first edition from 1988.

[4] With thanks to Elisabetta Di Nitto from Politecnico di Milano for bringing up this notion of requirements completeness.

* A version of this article was first published on the Communications of the ACM blog.*

The post Are my requirements complete? appeared first on Bertrand Meyer's technology+ blog.

The post Adult entertainment appeared first on Bertrand Meyer's technology+ blog.

Let us make sure we understand: here children are not allowed, but playing is.

As a consequence such playing must be performed by non-children only. Adults welcome to play!

Maybe it is actually not the intended meaning. Instead of

(and (not (allowed children)) (allowed playing))

the desired parsing may be

(not (allowed (playing children)))

One hint in favor of this second interpretation is that in practice people seldom put up signs to advertise that something *is* allowed.

So new-line must be a mere break character equivalent to space, not a semantics-carrying delimiter.

Somewhat reminiscent of *Eats, Shoots & Leaves*. Here it is not even a punctuation mark, just a humble new-line, but its visual effect is strong all the same.

The post June LASER school, Elba, on Devops, Microservices… appeared first on Bertrand Meyer's technology+ blog.

]]>This is the 16th edition of the school (already) and, as always, rests on the LASER recipe of “Sea, Sun and Software”: densely packed lectures by top experts with the opportunity to enjoy the extraordinary surroundings of the Island of Elba and the Hotel del Golfo’s unique food, beach and facilities, with lots of time devoted to interactions between speakers and attendees.

This year’s theme is devoted to advances in the newest Web technologies and the corresponding software engineering issues and development models.

*Arrival on May 31st, departure on June 7th.*

The post Those were the days appeared first on Bertrand Meyer's technology+ blog.

I like the banner (what do you think?). It even has the correct Latin noun and verb plurals.

Anyone know where to find a university today with that kind of students, that kind of slogan, that kind of attitude and that kind of grammar? Please send me the links.

The post Entrez sans frapper appeared first on Bertrand Meyer's technology+ blog.

(Canon EOS 70D, f/14, 1/500, ISO/4000)
