2015-02-12


When software developers refer to “magic numbers” they mean those numeric literals which appear in a program without explanation — as if by magic. Consider the mysterious figures in this incantation:

int cigarettes()
{
    return 365 * 20 * 10 + 2 * 20 + 17;
}

Why is the 20 repeated? Does the first 20 mean the same as the second one? Could 365 be the number of days in a year? Named constants would make the code easier to read and maintain.
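Named constants answer these questions at a glance. Here's one possible reading, sketched in Python; the interpretation of the literals (a pack of 20 a day for ten years, plus a couple of spare packs and 17 loose cigarettes) is my guess, not established fact.

```python
# A guess at what the magic numbers mean; the names are assumptions.
DAYS_PER_YEAR = 365
CIGARETTES_PER_PACK = 20
YEARS_SMOKED = 10
SPARE_PACKS = 2
LOOSE_CIGARETTES = 17

def cigarettes():
    return (DAYS_PER_YEAR * CIGARETTES_PER_PACK * YEARS_SMOKED
            + SPARE_PACKS * CIGARETTES_PER_PACK
            + LOOSE_CIGARETTES)

print(cigarettes())  # 73057, the same total as the original
```

Whatever the right reading, the named version makes it possible to check the formula against its intent.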

Some numbers truly are magical though.


The number 2147483647 is special and terrible.

It’s a substantial number, far greater than the number of goals Lionel Messi has scored or the number of hot dinners I’ve eaten, and comparable with the number of heartbeats in a lifetime or the number of instructions a processor executes in a second; but it’s not that large. You’ll need more than 2147483647 bytes to install a modern operating system, let alone store your video collection. And shuffling a pack of just 52 cards has 80658175170943878571660636856403766975289505440883277824000000000000 possible outcomes.

If 2147483647 isn’t remarkable for its size it certainly has a noteworthy history. In 1772 the Swiss mathematician Leonhard Euler proved it a prime. It was the largest known prime at the time, a record it held for almost a century. Euler didn’t have a computer to hunt for primes so he narrowed the field by focusing on Mersenne numbers — numbers one less than a power of two — a strategy which remains a winner even today, when computers are networked to search.

Actually, 2147483647 is a double Mersenne prime, which is to say it takes the form 2^m - 1, where m itself takes the form 2^n - 1.

>>> 2**(2**5 - 1) - 1
2147483647
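The nesting can be spelled out with a small helper — mersenne here is my own throwaway function, not anything from the standard library.

```python
def mersenne(n):
    # A Mersenne number: one less than a power of two.
    return 2**n - 1

print(mersenne(mersenne(5)))  # 2147483647
```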



2147483647 has a special significance for C programmers, who know it by the name INT_MAX, and for C++ programmers, who demystify the digits as std::numeric_limits<int>::max().

Remember, 2147483647 is Mersenne(Mersenne(5)), which is Mersenne(31), or 2 to the power 31 minus 1. In binary arithmetic you add a zero on the right to multiply by 2 so 2 to the 31 is 1 followed by 31 zeros; and subtracting 1 leaves 31 1’s. It’s the largest signed value you can fit in a 32 bit register.

>>> mersenne31 = 2**31-1
>>> bin(mersenne31)
'0b1111111111111111111111111111111'
>>> hex(mersenne31)
'0x7fffffff'
>>> mersenne31
2147483647

It’s quite possible to inadvertently increment an int which has reached INT_MAX. The result is undefined behaviour: anything could happen.
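Python’s integers never overflow, but we can model what a wrapping two’s-complement 32-bit register typically does. Note the hedge: in C the behaviour on signed overflow is undefined, so this wrap-around is only what commonly happens in practice, not a guarantee.

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def inc_int32(x):
    # Increment as a wrapping two's-complement 32-bit register
    # typically would. In C, signed overflow is undefined behaviour.
    return (x + 1 - INT_MIN) % 2**32 + INT_MIN

print(inc_int32(INT_MAX))  # -2147483648: wrapped round to INT_MIN
```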

Gangnam Style

We never thought a video would be watched in numbers greater than a 32-bit integer (=2,147,483,647 views), but that was before we met PSY. Gangnam Style has been viewed so many times we had to upgrade to a 64-bit integer (9,223,372,036,854,775,808)!

— YouTube developers


Exactly what undefined behaviour was provoked when PSY’s popularity broke the magic limit isn’t disclosed. Maybe a server leaked account details to North Korean hackers. Or maybe the video’s viewing figures were wrong for a while.

Note that the new limit of 9,223,372,036,854,775,808 is an unsigned value, which is exempt from this flavour of undefined behaviour and wraps to zero when you bump it up.
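Again modelling the fixed width in Python (whose own ints are unbounded): unsigned arithmetic is defined to wrap modulo a power of two, so the view counter would roll round to zero rather than trigger undefined behaviour.

```python
UINT64_MOD = 2**64

def inc_u64(x):
    # Unsigned increment: defined to wrap modulo 2**64.
    return (x + 1) % UINT64_MOD

print(inc_u64(UINT64_MOD - 1))  # 0
```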

Bugwards compatibility

There’s another reason for preferring INT_MAX to the magical 2147483647: the two values might be different. 2147483647 is 2147483647 but INT_MAX depends on the implementation.

A modern computer probably has 64 bit registers making a 64 bit int a more natural choice. However, for compatibility reasons, ints may intentionally be limited to 32 bits!
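You can check what your own platform does. On common 64-bit desktop platforms a C int is still 32 bits wide, though the standard doesn’t require it.

```python
import ctypes

# The width of this platform's C int, in bits.
print(ctypes.sizeof(ctypes.c_int) * 8)  # commonly 32, even on 64-bit hardware
```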

Lessons from the OuLiPo

2015-01-12

I’m delighted to announce that my talk, Lessons from the OuLiPo, has been accepted for the ACCU 2015 conference.

Slide 1

The talk follows up on a lightning talk I gave at my employer’s last year about Georges Perec’s masterpiece, Life A User’s Manual. Click the graphic below to play the animation and read the transcript.

Knight’s tour

I’ll be giving a preview of the full version here in Bristol in a couple of weeks. I hope you can come along. More details on Meetup.

In the meanwhile, the next section of this page introduces the ideas I’ll be exploring in my talk.


As software developers we often ponder what it is we do: are we architects, engineers, or scientists? Are we — gasp! — rock stars or ninjas? Which metaphors best fit? Tending a code base is like gardening. Through the seasons we encourage new growth, whilst pruning back dead code and squashing bugs. Programming is like carpentry, maybe. Select the right tool from a set kept sharp and ready for action. Programming is like cooking. Source the finest ingredients and follow the recipe.

I think there’s a more obvious metaphor. Actually there’s nothing meta- about it. It is what we do.

We’re writers.

We write to communicate and to instruct. We write for fun and profit. We edit and adapt. We rewrite. We borrow text from other writers. The languages we think we write in — C++, Python, Javascript — are actually just highly stylised dialects of our native tongue. Like poets we’re particular about punctuation and space. We have strange ideas about spelling. The texts we write, programs, are subject to formal constraints.

We’re writers bound by mathematical rules.

November, 1960. Paris. The poet Raymond Queneau organises the first meeting of a group which will become known as OuLiPo. A dozen turn up: writers, mathematicians, pataphysicians and surrealists. Their mission: to explore the literary potential of applying mathematical constraints to texts.

Of course constrained writing is nothing new — consider the haiku: 17 syllables, arranged as 3 phrases of 5, 7 and 5 syllables, a Japanese form many centuries old — and one strand of the OuLiPo’s efforts is devoted to researching past experiments and structures; but I claim it’s no coincidence the OuLiPo emerged at much the same time as our own novel form of constrained writing: computer programming.

In this talk I’ll discuss the OuLiPo in more depth, investigating the parallels between their work and ours. We’ll focus on Georges Perec, whose book Life A User’s Manual is an Oulipian tour de force. There will be some code, as well as quines, easter eggs and — as you’d expect — bugs.

Why zip when you can map?



You’ve got a couple of parallel lists you’d like to combine and output, a line for each pair. Here’s one way to do it: use zip to do the combining.

>>> times = [42.12, 42.28, 42.34, 42.40, 42.45]
>>> names = ['Hickman', 'Guest', 'Burns', 'Williams']
>>> fmt = '{:20} {:.2f}'.format
>>> print('\n'.join(fmt(n, t) for n, t in zip(names, times)))
Hickman              42.12
Guest                42.28
Burns                42.34
Williams             42.40

Slightly more succinctly:

>>> print('\n'.join(fmt(*nt) for nt in zip(names, times)))

If you look at the generator expression passed into str.join, you can see we’re just mapping fmt to the zipped names and times lists.

Well, sort of.

>>> print('\n'.join(map(fmt, zip(names, times))))
Traceback (most recent call last):
IndexError: tuple index out of range

To fix this, we could use itertools.starmap, which effectively unpacks the zipped pairs.

>>> from itertools import starmap
>>> print('\n'.join(starmap(fmt, zip(names, times))))
Hickman              42.12
Guest                42.28
Burns                42.34
Williams             42.40

This latest version looks clean enough but there’s something odd about zipping two lists together only to unpack the resulting 2-tuples for consumption by the format function.

Don’t forget, map happily accepts more than one sequence! There’s no need to zip after all.

Don’t zip, map!
>>> print('\n'.join(map(fmt, names, times)))

Find the average of a collection of tuples or dicts using Python


You’ve been running some tests, each of which returns a 3-tuple of numerical results — (real, user, sys) times, maybe — and you’d like to combine these into a single 3-tuple, the average result.


def average(times):
    N = float(len(times))
    return (sum(t[0] for t in times)/N,
            sum(t[1] for t in times)/N,
            sum(t[2] for t in times)/N)

If you want a more generic solution, one which works when the tuples might have any number of elements, you could do this:

def average(xs):
    N = float(len(xs))
    R = len(xs[0])
    return tuple(sum(x[i] for x in xs)/N for i in range(R))

or this:

def average(xs):
    N = float(len(xs))
    return tuple(sum(col)/N for col in zip(*xs))

The second generic variant uses zip to transpose its inputs.
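To see the transposition in isolation:

```python
# zip(*rows) flips rows into columns.
rows = [(1, 2, 3), (4, 5, 6)]
cols = list(zip(*rows))
print(cols)  # [(1, 4), (2, 5), (3, 6)]
```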

Now suppose we have keyed collections of results which we want to average:

>>> times = [{'real': 34.4, 'user': 26.2, 'sys': 7.3},
             {'real': 28.7, 'user': 21.5, 'sys': 6.4},
             {'real': 29.3, 'user': 22.0, 'sys': 6.9}]

If, as in the example above, each result has the same set of keys, the average result could be calculated like this:

>>> N = float(len(times))
>>> { k : sum(t[k] for t in times)/N for k in times[0] }
{'real': 30.8, 'sys': 6.9, 'user': 23.2}

What if the inputs don’t have the same keys? Consider the contents of four fridges.

>>> fridges = [
    { 'egg': 5, 'milk': 1.700, 'sausage': 6 },
    { 'beer': 6, 'milk': 0.568, 'egg': 1 },
    { 'egg': 3, 'sausage': 4, 'milk': 0.125, 'lettuce': 1 },
    { 'carrot': 4 }]

A Counter can collect and calculate the average fridge contents.

>>> from collections import Counter
>>> total = sum(map(Counter, fridges), Counter())
>>> N = float(len(fridges))
>>> { k: v/N for k, v in total.items() }
{'sausage': 2.5, 'lettuce': 0.25, 'beer': 1.5, 'carrot': 1.0, 
 'egg': 2.25, 'milk': 0.59825}

Note that although Counters were primarily designed to work with positive integers to represent counts, there’s nothing stopping us from using floating point numbers (amount of milk in our example) in the values field.
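One caveat worth knowing: Counter addition discards totals which aren’t positive, so this averaging trick is only safe when every value is positive.

```python
from collections import Counter

# 'a' sums to zero, so Counter addition drops it entirely.
total = Counter(a=1) + Counter(a=-1, b=2)
print(total)  # Counter({'b': 2})
```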

Group When

2014-07-16

Phil Nash’s recent tweet intrigued me.

He later clarified what he was after — and had now found — linking to a solution posted a couple of years ago by Tomas Petricek. The function groupWhen splits a sequence into groups, starting a new group whenever the predicate returns true.

 module Seq =
   /// Iterates over elements of the input sequence and groups adjacent elements.
   /// A new group is started when the specified predicate holds about the element
   /// of the sequence (and at the beginning of the iteration).
   /// For example: 
   ///    Seq.groupWhen isOdd [3;3;2;4;1;2] = seq [[3]; [3; 2; 4]; [1; 2]]
   let groupWhen f (input:seq<_>) = seq {
     use en = input.GetEnumerator()
     let running = ref true
     // Generate a group starting with the current element. Stops generating
     // when it finds an element such that 'f en.Current' is 'true'
     let rec group() = 
       [ yield en.Current
         if en.MoveNext() then
           if not (f en.Current) then yield! group() 
         else running := false ]
     if en.MoveNext() then
       // While there are still elements, start a new group
       while running.Value do
         yield group() |> Seq.ofList }

Here’s a nice Haskell version coded up by @sdarlington.

Maybe takewhile and dropwhile could power a Python solution, but my first choice would be itertools.groupby. Groupby chops a sequence into subsequences, where the elements of each subsequence have the same key value. A suitable key function, in this case, must change its return value every time the sequence yields an element for which the predicate holds. It could toggle between a pair of values, for example. Or it could just count the number of times the predicate holds.

from itertools import groupby

class count_p:
    ''' Return a value which increments every time the predicate holds. '''
    def __init__(self, pred):
        self._n = 0
        self._pred = pred
    def __call__(self, v):
        self._n += self._pred(v)
        return self._n

def group_when(pred, xs):
    return (gp for _, gp in groupby(xs, count_p(pred)))

Here, group_when accepts an iterable and returns an iterable sequence of iterable groups. Clients choose how to consume the results.

>>> def odd(v): return v % 2
>>> xs = group_when(odd, [3, 3, 2, 4, 1, 2])
>>> print([list(g) for g in xs])
[[3], [3, 2, 4], [1, 2]]

Note that count_p does something very like itertools.accumulate. Here’s another version of group_when which takes advantage of this.

from itertools import accumulate, groupby, tee

def group_when(pred, xs):
    xs, ys = tee(xs)
    accu = accumulate(map(pred, ys))
    return (gp for _, gp in groupby(xs, lambda _: next(accu)))


After a short break, here’s a third version of group_when. This is the first time I’ve found a use for takewhile and dropwhile. Beware: as the teed streams xs and ys diverge, the amount of backing storage required will grow … only for the stored values to then be dropped!

from itertools import chain, dropwhile, takewhile, tee

def group_when(p, xs):
    def notp(x): return not p(x)
    xs = iter(xs)
    while True:
        try:
            x = next(xs)
        except StopIteration:
            return  # PEP 479: a bare next() here would raise RuntimeError
        xs, ys = tee(xs)
        yield chain([x], takewhile(notp, xs))
        xs = dropwhile(notp, ys)

def odd(x):
    return x % 2

[list(g) for g in group_when(odd, [3, 3, 2, 4, 1, 2])] # [[3], [3, 2, 4], [1, 2]]

Word Aligned, hosted by Github


To anyone who subscribes to this site’s feed, my apologies if your reader recently got filled with items you’d already seen. I can explain.

About seven years ago, I signed up for a great deal on hosting for life with a company called TextDrive. For most of the time since then, this service was actually provided by Joyent — who took over TextDrive. Then Joyent announced the hosting for life deal was being “sunsetted”, i.e. canned. Happily TextDrive’s original founder, Dean Allen, stepped in to revive his company and honour the original offer, though it’s been all too clear that he’s hard to get hold of and the operations staff are over-stretched.

Last week, a tweet from one of these hardworking ops tipped me off that TextDrive would soon be gone.


What to do?

After some googling I’ve chosen Github pages as the new host for Word Aligned. I’ve had to relinquish a little control over URLs to make the site truly static. (A side-effect being old content appearing with new identifiers in the RSS feed). I also won’t be configuring a web server or rooting around in rotating log files. The experts at Github can take care of that.

I do get to keep my own domain, which was paramount. I don’t have to pay anything, which is nice.

Previously, I used version control to track my site’s contents and rsync over SSH to publish. Now I simply use version control: pushing a change to github is publication.

It’s early days yet, but I’m happy with the change. The feed should settle down now, so stay subscribed. Please let me know if you spot any problems with the site.

Go for short variable names


Recently Brad Fitzpatrick promoted the Go style guide on twitter, which prompted Tim Penhey to take issue with its advice on variable naming.

A brief and inconclusive exchange followed. Twitter’s fine for opinions and one-liners but flawed for discussions — even when the subject happens to be brevity.

I’m not going to tweet about it, but I like Go and I like its style. I’d rather read code which uses short variable names. Long descriptive names, which may appear to provide more information, often obscure the structure and flow of the code. The narrower the scope, the shorter names can be; so the style guide implicitly sanctions short functions and shuns globals. All good.

How short is short?

Those variables are certainly short rather than descriptive but they aren’t scary: c could be a character; ch for channel, maybe, or another character; d for data, difference or distance; m, midpoint; s string; st state. All guesses, of course, but in context I’d expect to see — in the space of just a few lines of code — where each variable lives and how it’s used, a more clear and correct indication of what it means than a lengthy name could ever be.

Single character variables are just fine, says the style guide:

Prefer c to lineCount. Prefer i to sliceIndex.

Some languages allow you to go further. Omit a variable in Perl and it defaults to being what you’d like it to be. Usually.

A single character variable name is an extreme form of abbreviation. It works nicely for small things, like pixels and characters.

Pixel pixel, pel, px, p;
Character character, char, ch, c;

Some less terse abbreviations hurt my ear: mngr, svr, cnt.

The style guide is, after all, a guide, and common sense applies. Some abbreviations are ugly and others save so little space they aren’t worth it.

Go’s advice on naming tips a hat to the language’s C heritage and to C’s great application, UNIX, which is unsurprising when you realise one of Go’s prominent contributors, Ken Thompson, had a hand in both. When Thompson was asked what he would do differently if he were redesigning UNIX he replied:

I’d spell creat with an “e”.

So, working on Go, he did.

You wait all day for a bus…

2013-10-02

Any and all didn’t appear in Python until version 2.5, released in 2006, when the language was already well into its teens.

Why the delay in offering such fundamental functions? An oversight? Or simply that they’re so easy to implement they weren’t thought necessary. Either way, they’re here now.

The functions are closely related and complementary. We can define any in terms of all and vice-versa.

import operator

def any_(xs):
    return not all(map(operator.not_, xs))

def all_(xs):
    return not any(map(operator.not_, xs))
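One detail worth checking is the empty sequence: any([]) is False and all([]) is True, and the cross definitions preserve these conventions. The definitions are restated here so the snippet stands alone.

```python
import operator

def any_(xs):
    return not all(map(operator.not_, xs))

def all_(xs):
    return not any(map(operator.not_, xs))

# The empty sequence: no element is true, yet every element is true.
assert any_([]) is False
assert all_([]) is True
assert any_([0, 1]) and not all_([0, 1])
```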

C++ reached its 30s before introducing its own versions of these logical algorithms, any_of and all_of, but made up for lost time by finding room for a third, none_of, which is not any_of.

template <class Iter, class Pred>
bool none_of_(Iter b, Iter e, Pred p)
{
    return std::find_if(b, e, p) == e;
}

template <class Iter, class Pred>
bool any_of_(Iter b, Iter e, Pred p)
{
    return !none_of_(b, e, p);
}

template <class Iter, class Pred>
bool all_of_(Iter b, Iter e, Pred p)
{
    return !any_of_(b, e, std::not1(p));
}

Reverse, Esrever

2013-09-27

Reverse is a member of the C++ standard library, but its reverse, esrever, isn’t. Similarly keep isn’t but peek is.

Can anyone think of a C++ standard library member whose reverse is also a member?

Answers in the comments below.


Clown, Flee, Jump


The clown is running away from the circus. The contortionist wants nothing more to do with him. She’s confessed everything to her husband, the strongman, who’s after the clown’s blood. The clown has no time to pack. Hurrying from the big top he snatches up his most treasured possessions and some refreshments:

  • a makeup case
  • a box camera, with tripod attached
  • a cactus
  • a roasted goose
  • a magnum of champagne

Each item weighs exactly 3kg.

Soon he reaches the edge of a ravine. A rope bridge connected to the other side has a sign in front of it.


The bridge is 100m long. The clown weighs 70kg. The strongman, who’s closing in, weighs considerably more. The clown must cross the bridge at once to effect his escape. How can he do so without abandoning any of his baggage?


Elsewhere, it’s school sports day. Conditions are perfect for the high jump — warm, sunny, still — and a talented young athlete has raised the bar to 1.85m, which happens to be his own height. On the first two attempts he fails. On the third attempt he succeeds.

High Jump

“Chapeau!” says the French teacher.

“Awesome!” says the Chaplain.

“Unbelievable!” says the head of Mathematics.

“Actually,” the sports coach says, “it’s quite simple: a combination of talent, training, and technique. He cleared his own height but his centre of gravity didn’t.”

“What nonsense!” says the mathematician.



The clown barely breaks stride. Juggling with mismatched objects is part of his act and, quick as a flash, case, camera, cactus, fowl and fizz are in the air. At no point does he have more than one object in either hand so his weight never exceeds 73kg. The bridge holds. The clown gets away.

“Grrrr!” says the strongman, shaking his fists.


Angle brackets hurt my eyes

2013-05-01

“All these angle brackets, they’re hurting my eyes.”

I’m quoting a colleague who sits next to me and I sympathise. Perhaps a section from a build file was causing him pain?

<target name="all" description="Build product.">
  <mkdir dir="build" if="${not directory::exists('build')}" />

Or maybe it was some XSL to generate XML by applying XML to XML?

<xsl:if test="$index-make or @index!='false'">
  <xsl:if test="//index">
    <fo:block text-align-last="justify" space-before="5pt">
      <fo:basic-link internal-destination="index-page">
          <xsl:when test="(/doc/@lang = 'ja') or (/doc/@lang = '')"></xsl:when>
      <fo:page-number-citation ref-id="index-page"/>

Or perhaps he was wrestling with a C++ compile error.

print.cpp: In function 'void print(const
std::map<std::basic_string<char, std::char_traits<char>,
std::allocator<char> >, int, std::less<std::basic_string<char,
std::char_traits<char>, std::allocator<char> > >,
std::allocator<std::pair<const std::basic_string<char,
std::char_traits<char>, std::allocator<char> >, int> > >&)':
print.cpp:7: error: no match for 'operator<<' in 'std::cout <<
details' /usr/include/c++/4.2.1/ostream:112: note: candidates are:
std::basic_ostream<_CharT, _Traits>& std::basic_ostream<_CharT,
_Traits>::operator<<(std::basic_ostream<_CharT, _Traits>&
(*)(std::basic_ostream<_CharT, _Traits>&)) [with _CharT = char,
_Traits = std::char_traits<char>]

What makes angle brackets so vexatious? Could it be their sharp corners?

Lisp is as elegant as XML is prickly, yet it too is famous for brackets, albeit round ones — and lots of them.

Imagine a parallel universe with angle-bracket Lisp. I wonder if the language would have proved so enduring?
Not so pretty now?
<define <euler-transform s>
  <let <<s0 <stream-ref s 0>>
        <s1 <stream-ref s 1>>    
        <s2 <stream-ref s 2>>>
    <cons-stream <- s2 </ <square <- s2 s1>>
                          <+ s0 <* -2 s1> s2>>>
                 <euler-transform <stream-cdr s>>>>>

Looking more closely at the first code snippet, the section from a build file, we can see the problem isn’t so much the angle brackets as all the others.
<mkdir dir="build" if="${not directory::exists('build')}" />

The braces and parentheses may be embedded in a double quoted string but that only makes things worse. The nested levels hurt my eyes and, if you can bear to look at the code long enough, you’ll realise there’s a simpler message trying to get out: mkdir -p build.


2013-04-17

I like working around enthusiasts and optimists but that doesn’t mean I want to use chirpy or bombastic software.

These days I build programs using visual studio. Sure, it’s a decent tool but part of me cringes every time I’m asked to open a “Solution” especially when what I get seems more like a “Muddle”. A progress bar slides into action after my program links: “Updating Intellisense …”

Who coined that galumphing portmanteau? It means auto-completion and I don’t want to know it’s going on — especially since I edit code using Emacs.

Perforce is new to me and I lean on a graphical client so heavily I sometimes trip over it. So when I’m trying to dance round client workspaces and their half-baked integration with microsoft muddles solutions, the last thing I want is to be asked to “Choose a Favorite Connection”. When it comes to Perforce Servers I don’t have favourites let alone favorites. Sorry.

ACCU 2013

2013-04-11


Brian Marick opened the second day of ACCU 2013 with a keynote presentation entitled: “Cheating Decline: Acting now to let you program well for a really long time”. He drew a distinction between effortful and automatic thinking. For example, we can drive a car along a clear road automatically but it requires considerable concentration to parallel park that same car. By tuning out unwanted signals crickets can locate their mates using minimal brainpower, and cricket players have no need of Newtonian dynamics to track the trajectory of a ball — they apply a simple visual self-calibrating algorithm to catch out batsmen. Tango dancers disturb and re-establish invariants. A robot can walk for hours without thinking about what it’s doing. Actually, if it’s your job to park cars, you can probably do that without thinking; and this was Brian Marick’s main cheat — find the work practices which allow you to lean on your perceptions and so avoid effortful thinking.

Anthony Williams’ talk on C++11 Features and Real World code did require effortful thinking but that was what I’d hoped for. He provided a concise and expert summary of the new language features in action, focusing on the biggest early winners. Leading with auto, lambda, range-for, he went on to talk about concurrency and move semantics. I learned that lambda functions can have a mutable qualifier. Ha!

I couldn’t resist Olve Maudal’s C++11 pub quiz, appropriately held in the Marriott Hotel bar, for which we formed teams and mentally compiled and executed dodgy code, capturing standard output on an answer sheet. Some of the answers may well have been implementation dependent but Olve specified an implementation: our answers should match this laptop running this software. I was simultaneously appalled by the limits of my knowledge on fundamental subjects such as integral promotion and initialisation order, and surprised by my ability to correctly predict the behaviour of some esoteric and perverse code. I’m chastened and will be studying the answers in the cold light of day. Brian Marick may have advocated programming after a beer or two in his morning session, but the afternoon pub quiz proved that coffee works better for me!

A programmer’s dozen (13, which is 12 counting from zero!) lightning talks kept the day crackling with energy. Ewan Milne chaired the session expertly, adeptly dispatching a birthday cake as proceedings commenced. I wish I could describe all the talks but you really had to be there. Phil Nash’s use of the little known left arrow operator ← got a well deserved response from the audience. Sander Hoogendoorn stuck the boot into “Agile Fluffiness”. James Grenning’s talk on embedded development was a lightning keynote: hilarious, moving and, ultimately, tragic.

An Exploration of the Phenomenology of Software Development


I was lucky to be in the audience last week, when Charles Tolman visited Accu Bristol to preview his Accu 2013 conference talk: “An Exploration of the Phenomenology of Software Development”.

The talk precis is entirely accurate — but maybe not so helpful. What this talk actually delivers is a highly original and thoughtful examination of what the software development revolution is and how we can make sense of it.

Charles Tolman’s central insight is that we have crossed a boundary. New tools and technologies have extended our powers. Just as machinery developed in the industrial revolution extended our physical abilities, so software developed in the information technology revolution extends our capacity for thought.

Essentially, software development is thinking, so to analyse it we need to think about thought. With this realisation we can view our industry as a continuation of the efforts of earlier thought workers — philosophers such as Descartes and Goethe.

If this all sounds a bit heavy, Charles made space for anecdotes, humorous insights and pictures of gliders. I look forward to future episodes.

Patience Sorted

2013-03-14

I gave a lightning talk today about patience sorting and its application to the longest increasing subsequence problem. It’s a subject I’ve written about before. My computer has been put through several million simulations. I’ve even coded up a javascript demo which deals out virtual playing cards and sorts them at the click of a button.
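For anyone who’d like to play along at a keyboard, here’s a minimal sketch of the pile-forming step (my own sketch, not the talk’s code): each card goes on the leftmost pile whose top card beats it, and the number of piles equals the length of the longest increasing subsequence.

```python
from bisect import bisect_left

def lis_length(xs):
    # Patience sorting: keep only the top card of each pile.
    tops = []
    for x in xs:
        i = bisect_left(tops, x)  # leftmost pile whose top >= x
        if i == len(tops):
            tops.append(x)        # no such pile: start a new one
        else:
            tops[i] = x           # place x on that pile
    return len(tops)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4, e.g. 1, 4, 5, 9
```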


Today I used real playing cards: a linen-finished standard deck. For any talk it’s nice to have a prop.

Rock & Pop Legends

Now, I thought I understood the patience sort algorithm but until yesterday I’d never actually played it with real cards. I’ve been surprised by how much this physical dimension has developed my understanding.

I’ll be testing my new cards on some other sorting algorithms; I have high hopes. It would be good to find a similarly simple prop for linked data structures so I can balance trees, flip lists and walk graphs.