Preamble

Recently I found myself in the delightful situation of wanting to rebuild my home network from scratch — again. This time, however, I decided to do it properly, so I literally broke everything.

I’ll spare you the details of my home setup and the nightmares I haven’t fully emerged from. What I do want to talk about is a side problem that surfaced during this little exercise in style — the one that had me redefining VLANs, SSIDs, passwords, firewall rules, and so on. Let’s talk about the real problem.

Among the various devices I reluctantly had to reconfigure, I ended up having to set up an old smart TV whose graphical interface isn’t exactly a triumph of design, and whose technology is, let’s say, quaint. As you can imagine, this TV — for reasons still unknown to science — sits just outside my wired network perimeter, which leaves one and only one valid option: setting up Wi-Fi. By hand. With a remote control.

Two things happened.

First, I started to count clicks. Not just the obvious “letter under cursor → letter I need” hops, but the hidden ones: pressing Shift to wake up the uppercase, switching to the 123 layer for a digit, switching again to #+= for a less common symbol, then back to abc. The “type one character” cost suddenly looks a lot less like one click per glyph and a lot more like one to fifteen clicks, depending on where you are and where you need to go.

Second, I started to wonder whether I was the problem. Different remotes ship different keyboard layouts — alphabetical grids, QWERTY grids, Apple-TV-style single rows, T9 phone pads. Some let you wrap from the rightmost key back to the leftmost on the same row. Some bump you to the next row instead. Some refuse to wrap at all. Each of these choices changes the click cost of the same password, sometimes by a lot.

That’s the question I want to model: what’s the real cost of typing a password on a remote control?¹

The problem, formally

I (we?) basically press five things on a TV remote:

↑ ↓ ← → OK.

Every press is one click. My goal is to end up with a string of characters in the input field — say, my Wi-Fi password S7r0ng!Pass — by emitting them in order. The question is: what is the minimum number of clicks?

If there were no layer switches, no caps lock, no wrapping policy, and the keyboard were a single \(r \times c\) grid containing exactly the alphabet I needed, the answer would be uninteresting. For each character I would walk Manhattan-distance steps and press OK. The total click cost for a string \(s = c_1 c_2 \ldots c_n\) starting at position \(p_0 = (r_0, k_0)\) — with \(c_0\) read as that starting position — would be

$$ \mathrm{cost}(s) \;=\;\sum_{i=1}^{n} \left( \big| r(c_i) - r(c_{i-1}) \big| + \big| k(c_i) - k(c_{i-1}) \big| + 1 \right) $$

with \((r(c), k(c))\) the row and column of \(c\) on the grid, and the \(+1\) for the OK click that emits \(c_i\).
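
The no-layer formula is easy to check mechanically. Here’s a quick Python sketch (letter positions approximate the QWERTY letters grid shown further down, special keys omitted; the companion CLI is Go, but the arithmetic is identical):

```python
# Click cost on a single grid: Manhattan distance per hop, plus one OK per glyph.
# Positions are (row, col) on a 4x10 QWERTY-style letters layer.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm,."]
POS = {ch: (r, c) for r, row in enumerate(ROWS) for c, ch in enumerate(row)}

def manhattan_cost(s, start=(0, 0)):
    """Sum of |dr| + |dc| + 1 (the OK click) over the whole string."""
    total, (r0, c0) = 0, start
    for ch in s:
        r1, c1 = POS[ch]
        total += abs(r1 - r0) + abs(c1 - c0) + 1
        r0, c0 = r1, c1
    return total

print(manhattan_cost("qwerty"))  # starting on q: five single hops + six OKs = 11
```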

What makes the real problem interesting is that on-screen TV keyboards are not a single grid. They are a small family of 2D grids — layers — bridged by special keys. To type S7r0ng!Pass on a typical smart-TV QWERTY you need at least:

  • the letters layer for r, n, g, a, s, s;
  • a caps-lock toggle for the leading S and P, plus a way back if caps is sticky;
  • the numbers layer for 7 and 0;
  • the symbols layer for ! (or sometimes the numbers layer, depending on the TV);
  • and, for each layer change, a click on the layer-switch key, which itself has to be navigated to.

The cost is no longer “Manhattan + 1”. It is the total weight of a path through a much bigger graph — and you don’t yet know how big.

ps: a note on caps lock before moving on. The model in this post treats the toggle as a single click on, single click off — one click flips the bit, the next flips it back. Real on-screen keyboards often promote a one-shot toggle to a sticky lock on a second click, requiring a third click to disable. We’re deliberately ignoring that nuance: for a password of length \(|s| \le 40\) containing \(m\) blocks of consecutive uppercase letters with \(0 \le m \le 4\), the worst-case extra cost from missing the sticky promotion is at most \(2m\) extra clicks — under ten total against passwords whose cost is already in the dozens. System-level it doesn’t move the needle. The companion CLI does model the full three-state FSM (Off → OneShot → Sticky → Off) and lets you see the difference if you want to.
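
For the curious, the three-state machine the CLI models can be sketched in a few lines — a Python sketch with state names of my own choosing, not the repo’s actual implementation:

```python
# Caps as a three-state FSM: Off -> OneShot -> Sticky -> Off on successive
# shift presses; emitting a letter from OneShot falls back to Off, while
# Sticky persists until shift is pressed again.
class CapsFSM:
    def __init__(self):
        self.state = "Off"

    def press_shift(self):
        self.state = {"Off": "OneShot", "OneShot": "Sticky", "Sticky": "Off"}[self.state]

    def emit(self, ch):
        out = ch.upper() if self.state != "Off" else ch
        if self.state == "OneShot":   # the one-shot is consumed by the emission
            self.state = "Off"
        return out

caps = CapsFSM()
caps.press_shift()
print(caps.emit("s"), caps.state)                  # S Off  (one-shot consumed)
caps.press_shift(); caps.press_shift()
print(caps.emit("p"), caps.emit("a"), caps.state)  # P A Sticky
```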

Three layouts to argue about

Before getting to the graph, three concrete examples — the ones I keep meeting on actual devices. These are simplified ASCII renderings; real ones have more chrome, but the topology is what matters.

A. Smart-TV QWERTY (4×10), with caps + 123 + #+= layers. The most common one. The [123] and [#+=] keys live in the corners of each layer, which is good for muscle memory but bad for cost when you’re far from them. The [⇧] key is a sticky caps-lock on most modern TVs.

[layer: letters]
  q  w  e  r  t  y  u  i  o  p
  a  s  d  f  g  h  j  k  l [123]
 [⇧] z  x  c  v  b  n  m  ,  .
 [_______ space _______]

[layer: numbers]
  1  2  3  4  5  6  7  8  9  0
  -  /  :  ;  (  )  $  &  @  "
 [#+=] .  ,  ?  !  '  _  +  =  [abc]
 [_______ space _______]

[layer: symbols]
  [  ]  {  }  #  %  ^  *  +  =
  _  \  |  ~  <  >  €  £  ¥  •
 [123] .  ,  ?  !  '  "  `  §  [abc]
 [_______ space _______]

B. Alphabetical grid (5×6). Found on older TVs and some game consoles. Great for non-touch-typists, terrible for everyone else — the most common letters end up scattered far apart because the alphabet doesn’t care about frequency.

[layer: letters]
  a  b  c  d  e  f
  g  h  i  j  k  l
  m  n  o  p  q  r
  s  t  u  v  w  x
  y  z [⇧][123][↩][⌫]

C. Apple-TV-style single row. All letters on one horizontal strip, with separate strips for digits and symbols. Always-on wrap-around — pressing ← from a lands you on z. Geometrically a 1D ring; algorithmically still a graph, but a much smaller one.

[layer: letters]
  a b c d e f g h i j k l m n o p q r s t u v w x y z

[layer: numbers]
  0 1 2 3 4 5 6 7 8 9

[layer: symbols]
  . , ! ? ' " - _ ( ) [ ] { } < > / \ | + = * & ^ % $ # @ ~ ` :

Yes, OK, modern Apple TV remotes have a microphone — but somewhere downstream, something (or the AI) still has to type the password, right? (joking)

At a high level, none of these three is uniformly better than the others. A is fast for password-like strings full of common letters but punishes layer switches. B punishes everything but is predictable, which has its own uses as a baseline. C has the lowest movement cost per character on average — wrap-around makes the worst-case horizontal distance only \(\lfloor n/2 \rfloor\) instead of \(n-1\) — but every layer switch costs you a long horizontal trip to the switch key.

Before we can reason about the graph, one more dimension needs to be on the table.

Wrap policy is part of the layout

Three wrap policies cover what I’ve seen in the wild, and the click cost depends on which one you’re playing under:

  • WrapNone. The cursor stops at edges: pressing → from the rightmost column of a row is a no-op (or rejected, depending on the implementation).
  • WrapRow. Horizontal wrap stays on the same row, vertical wrap stays on the same column: pressing → from (2, 9) lands on (2, 0). This is the most common choice on smart-TV QWERTYs — topologically, each layer becomes a torus (see the figures below for one-layer and three-layer views);
    [figure: letters layer as a torus]
    [figure: three layers, three toruses]
  • WrapGrid. Wrap is computed on the linearised grid: pressing → from the last cell of a row lands on the first cell of the next row, and from the last cell of the last row lands on (0, 0). Less common, but the snake-wrap shortcut lets the cursor cross row boundaries via horizontal moves — something WrapRow’s torus topology doesn’t allow. The trade-off is asymmetric: WrapGrid pays one extra move on the row-end → row-start case that WrapRow handles in a single hop, but on most non-square layouts it pays back with a smaller diameter (we’ll see more formally why this matters in a while).
    [figure: letters layer as a torus]
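
As a sketch, the three policies boil down to a single cursor-step function. Python, with one stated assumption: the post only pins down WrapGrid’s horizontal rule, so vertical moves here wrap per column, WrapRow-style:

```python
# One cursor step on an R x C grid under each wrap policy.
# dr, dc in {-1, 0, +1}; returns the new (row, col).
def step(r, c, dr, dc, R, C, policy):
    if policy == "none":            # stop at the edges (a no-op beyond them)
        return (min(max(r + dr, 0), R - 1), min(max(c + dc, 0), C - 1))
    if policy == "row":             # torus: wrap row and column independently
        return ((r + dr) % R, (c + dc) % C)
    if policy == "grid":            # snake: wrap on the linearised grid
        if dc:                      # horizontal moves walk the linear index
            return divmod((r * C + c + dc) % (R * C), C)
        return ((r + dr) % R, c)    # vertical wrap per column (assumption)
    raise ValueError(policy)

print(step(2, 9, 0, 1, 4, 10, "none"))  # (2, 9): no-op at the edge
print(step(2, 9, 0, 1, 4, 10, "row"))   # (2, 0): same-row wrap
print(step(2, 9, 0, 1, 4, 10, "grid"))  # (3, 0): first cell of the next row
print(step(3, 9, 0, 1, 4, 10, "grid"))  # (0, 0): last cell wraps to the origin
```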

Modelling: a graph of cursor states

My first natural-but-wrong model was the “graph of keys”: one node per key, edges for the four directions, weight 1, plus an OK action that emits the current key’s character. This model is too weak — it loses two things that change typing cost dramatically, and both of them are already on the list of complications I gave above:

  • caps-lock state. Standing on a with caps on emits A, with caps off emits a. They are different outputs from the same physical position, and getting from one to the other costs a round trip to [⇧].
  • layer. Standing on row 0, column 0 of the letters layer (q) is a different node than row 0, column 0 of the numbers layer (1), and you can’t move between them with ↑↓←→ — only by pressing OK on a layer-switch key.

The right model is a graph of cursor states, where

  • A vertex is a cursor state:
$$ s \;=\; (\ell, r, c, \kappa) $$

with \(\ell\) the active layer, \((r, c)\) the cursor position inside that layer, and \(\kappa\) the caps-lock flag.

  • An edge is a single click on the remote, connecting two states.
  • A graph is strongly connected when every vertex can reach every other by following edges. For a typing keyboard this is non-negotiable: if it isn’t, some character is unreachable from some state, which is the same as saying “this layout can’t type some passwords starting from some positions”.
  • The distance between two vertices is the length of the shortest path between them — exactly what Dijkstra returns.
  • The diameter of a graph is the maximum distance between any two vertices: the worst-case shortest path, the answer to How bad can it get?

From any state — note this is a state, not just a key — exactly five edges leave, as we already noted:

  • four of them are directional moves constrained by the layout’s wrap policy;
  • one is the OK edge whose effect depends on the key under the cursor — emit a glyph, toggle caps, or jump to another layer.

Every edge has uniform cost 1. The state space stays relatively small — for the QWERTY above it’s \(3 \cdot 4 \cdot 10 \cdot 2 = 240\) nodes — so the search is… cheap (hopefully). But would you have guessed that an alphabet of 26 letters, plus another 26 or so symbols and numbers, would end up creating a space of more than two hundred nodes? I certainly wouldn’t have.

With the graph in hand we can finally start to answer the original question — what’s the real cost of typing a password? Typing a string of length \(n\) becomes \(n\) shortest-path searches stitched together: from the current state, find the cheapest path to any state where pressing OK emits the next character.
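
Here is a minimal Python sketch of that stitching on a hypothetical two-layer 2×3 layout (all edges cost 1, so plain BFS is Dijkstra; the toy layout, the key names, and the convention that a layer switch keeps the cursor position are my assumptions, not the CLI’s):

```python
# Typing cost as n stitched shortest-path searches over cursor states
# (layer, row, col, caps). WrapNone moves; every edge costs one click.
from collections import deque

LAYOUT = {
    "letters": [["a", "b", "c"], ["[shift]", "e", "[123]"]],
    "numbers": [["1", "2", "3"], ["4", "5", "[abc]"]],
}
SWITCH = {"[123]": "numbers", "[abc]": "letters"}

def on_ok(state):
    """Effect of pressing OK: returns (new_state, emitted_char_or_None)."""
    layer, r, c, caps = state
    key = LAYOUT[layer][r][c]
    if key == "[shift]":
        return (layer, r, c, not caps), None
    if key in SWITCH:                        # layer switch keeps the position
        return (SWITCH[key], r, c, caps), None
    return state, key.upper() if caps and key.isalpha() else key

def moves(state):
    layer, r, c, caps = state
    R, C = len(LAYOUT[layer]), len(LAYOUT[layer][0])
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= r + dr < R and 0 <= c + dc < C:    # WrapNone: edges stop
            yield (layer, r + dr, c + dc, caps)

def clicks_to_emit(start, target):
    """BFS for the cheapest way to emit `target`; returns (clicks, new_state)."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        state, d = frontier.popleft()
        nxt, emitted = on_ok(state)
        if emitted == target:
            return d + 1, nxt                      # +1 for the OK itself
        for n in [nxt, *moves(state)]:             # OK edge + 4 directional edges
            if n not in seen:
                seen.add(n)
                frontier.append((n, d + 1))
    raise ValueError(f"{target!r} unreachable")

def typing_cost(password, start=("letters", 0, 0, False)):
    total, state = 0, start
    for ch in password:
        d, state = clicks_to_emit(state, ch)
        total += d
    return total

print(typing_cost("aB1"))  # 1 (a) + 5 (shift trip + B) + 7 (layer trip + 1) = 13
```

Note that pressing OK on a character key is never useful as a transit edge — it would emit a wrong glyph — which falls out of the model for free: the OK edge of an emitting key loops back to the same state.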

So the question about the diameter of the graph stops being why diameter matters — the diameter is, almost by definition, the worst-case cost between two consecutive character emissions — and becomes:

How much the diameter can influence the modelling of the problem.

Since the diameter is the ceiling on the movement cost of a single character emission, we can state that if the diameter is \(D\), then regardless of where the cursor sits and what character I need next, the search spends at most \(D + 1\) clicks (the \(+1\) is the OK that emits).

We can refine the first equation we wrote down for cost:

$$ \mathrm{cost}(s) \;=\;\sum_{i=1}^{n} \left( \big| r(c_i) - r(c_{i-1}) \big| + \big| k(c_i) - k(c_{i-1}) \big| + 1 \right) $$

by attaching a worst-case ceiling to it. For a string of length \(n\), the total cost is bounded by \(n \cdot (D + 1)\) — a loose bound, since most characters cost much less than the diameter, but a useful one: it tells you immediately whether a layout has any chance of being usable. Diameter 30 produces passwords that take minutes to type. Diameter 6 doesn’t.

$$ \mathrm{cost}(s) \;=\; \sum_{i=1}^{|s|} \bigl( d_L(\sigma_{i-1}, \sigma_i) + 1 \bigr) \;\le\; |s| \cdot (D_L + 1) $$

Don’t panic — this is the same shape we already had, written more honestly. The left-hand side is the actual per-password cost: a sum of state-to-state shortest-path distances \(d_L(\sigma_{i-1}, \sigma_i)\) — exactly what Dijkstra returns for each sub-search — plus one OK click per character. The right-hand side is the layout-only worst case, a ceiling that depends on \(L\) but not on the specific string \(s\). The gap between the two sides is precisely what makes comparing layouts meaningful, and it is the gap we’ll exploit to define what the “typing complexity” of a password actually means.

This is the bit I want to defend, because it’s the one design choice that makes everything else easy: a graph-of-keys model would force layer switches and caps lock to live as global mode flags, mutated as side effects of OK. Once you have that, every search has to know about the global state, every heuristic has to peek at it, and the implementation rapidly turns into a state machine pretending to be a graph. Promoting layer and caps into the node identity costs you a 3–6× blow-up in node count (×3 for the layers, ×2 more for caps) — nothing on a problem this small — and buys you a clean shortest-path setup that any textbook algorithm just works on.

Before moving on to optimisation — premature optimisation being the root of all evil² — let’s first nail down the graph itself.

Why A* is interesting even though Dijkstra is fine

Fortunately, the brainiacs at NASA put their best engineers to work on this problem, and the only thing they could tell me was: “Do you remember graph theory?”³ So we need to take a step back and review it: there’s a guy here named Dijkstra⁴ and his savvy little brother named A*⁵. The first is an algorithm that explores a graph uniformly in all directions (ideal when the destination is unknown), while A* is the same algorithm with a heuristic that directs the search toward the goal, generally making it faster. An example used to explain this to me years ago is that of a fugitive fleeing from the police in a city chase: long story short, if the cops chasing him have studied Algorithms and Data Structures, things look bad for the fugitive.

Truth is, for a graph this size, Dijkstra is more than fast enough. So why bother with A*?

Because A* gives us a place to make the layout’s structure legible to the algorithm. The heuristic is where you encode “I, the user, can see the whole keyboard even though my cursor only stands on one key” — which is exactly the asymmetry between you and the on-screen state machine when you’re actually typing.

A simple admissible heuristic: for a target rune \(c\), return the distance from the current position to the nearest key in the current layer that emits \(c\) given the current caps state — Manhattan under WrapNone, toroidal under WrapRow, since plain Manhattan can overestimate across a wrap and would break admissibility — or 0 if no such key exists in this layer. It’s admissible (every move costs at least 1, and every position needs at least that many moves to reach the target), but it’s lazy: cross-layer cases get \(h = 0\), which throws away information that you, the human user, demonstrably have — especially if you’ve typed the password a hundred times before getting it right.
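
In code, the lazy heuristic is only a few lines. A Python sketch over the letters layer, using toroidal distance for WrapRow — the detail that keeps it admissible:

```python
# Admissible within-layer heuristic: distance (under the wrap policy) to the
# nearest key in the current layer that emits the target; 0 if none does.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm,."]   # 4x10 letters layer, approx.

def toroidal(a, b, R, C):
    dr, dc = abs(a[0] - b[0]), abs(a[1] - b[1])
    return min(dr, R - dr) + min(dc, C - dc)

def manhattan(a, b, R, C):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def h(pos, target, wrap_row=True, R=4, C=10):
    dist = toroidal if wrap_row else manhattan
    goals = [(r, c) for r, row in enumerate(ROWS)
             for c, ch in enumerate(row) if ch == target]
    return min((dist(pos, g, R, C) for g in goals), default=0)  # 0: other layer

print(h((0, 0), "p"))                  # q -> p with WrapRow: wrap left, 1 move
print(h((0, 0), "p", wrap_row=False))  # without wrap: 9 straight-line moves
print(h((0, 0), "7"))                  # digit not on this layer: h = 0
```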

A tighter heuristic uses the fact that we know the layout topology up front. For each pair of layers \((\ell_a, \ell_b)\) we can precompute a lower bound on the cost of switching between them — at minimum, reach the nearest layer-switch key + 1 OK + reach the target in the new layer.

I would say a good heuristic stitches these:

$$h(s, c) = d_{\text{within}}(s, c)$$

if \(c\) is reachable in \(s\)’s layer, else

$$d_{\text{to-switch}}(s) + 1 + d_{\text{from-arrival}}(c)$$

This is still admissible — it lower-bounds the true cost — and it’s informed enough that A* will visit drastically fewer states than Dijkstra on layouts with multiple layers.

The takeaway: A* is worth as much as the heuristic you put into it, and the heuristic is exactly where the layout’s structure earns its keep. The CLI ships both Dijkstra and A*, but uses Dijkstra by default in this post — the loose cross-layer heuristic above can let A* return suboptimal plans on multi-layer passwords, a known limitation of the implementation rather than a property of the algorithm.

From diameter to a typing-complexity metric

Real keyboard layouts are public knowledge. An attacker who knows you’re on a QWERTY smart-TV grid also knows dasda, qwerty, and 1234 are cheap to type — adjacent characters, minimal movement, trivially guessable. Arbitrarily distant symbols are the opposite: slow to type, hard to predict from the layout alone, but annoying for a human. I want to give that tension a mathematical shape.

Three pieces, in order.

Shannon entropy of the character distribution. Let \(\mathcal{A}(s)\) be the set of distinct characters in \(s\) and \(p_c\) the empirical frequency of each:

$$ H(s) \;=\; -\sum_{c \in \mathcal{A}(s)} p_c \log_2 p_c $$

Plug in a few passwords to build intuition:

  • aaaaaa — one unique character, \(p_a = 1\). So \(H = -1 \cdot \log_2 1 = 0\) bits. Zero entropy, as expected: the “all same character” password is perfectly predictable.
  • ababab — two characters, \(p_a = p_b = 1/2\). \(H = -2 \cdot \tfrac{1}{2} \log_2 \tfrac{1}{2} = 1\) bit. One bit per symbol, which is next to nothing.
  • abcdef — six distinct characters uniform, \(p_c = 1/6\) for each. \(H = -6 \cdot \tfrac{1}{6} \log_2 \tfrac{1}{6} = \log_2 6 \approx 2.585\) bits. That’s the ceiling for a six-character password over a six-symbol alphabet.
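
The three computations, checked mechanically rather than on a napkin (Python sketch):

```python
# Shannon entropy of the empirical character distribution of a string.
from collections import Counter
from math import log2

def H(s):
    n = len(s)
    # + 0.0 normalises IEEE's -0.0 in the single-character case
    return -sum((k / n) * log2(k / n) for k in Counter(s).values()) + 0.0

print(H("aaaaaa"))            # 0.0 bits: perfectly predictable
print(H("ababab"))            # 1.0 bit
print(round(H("abcdef"), 3))  # 2.585 bits = log2(6), the six-symbol ceiling
```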

\(H\) ignores everything a potential attacker might know about the world (dictionaries, keyboard-adjacency priors, dates) — it is a floor check, not a ceiling. It rules out the dumbest failure modes, and treating it as a real dimension of password safety is out of scope here. It’s just… well, entropy.

Topological dispersion. Same password, but now let’s focus on a measure of how far the cursor travels between consecutive characters on a given layout \(L\). Let \(\mathrm{pos}_L(c)\) be the key position of \(c\):

$$ T(s, L) \;=\; \frac{1}{|s|-1} \sum_{i=2}^{|s|} d_L\bigl(\mathrm{pos}_L(c_{i-1}),\, \mathrm{pos}_L(c_i)\bigr) $$

Same tour, on the QWERTY 4×10 with WrapRow:

  • aaaaaa — every consecutive pair is the same key, \(d_L = 0\) for each, so \(T = 0/5 = 0\).
  • qwerty — each consecutive pair is one key apart (that’s literally where the layout gets its name). \(T = (1 + 1 + 1 + 1 + 1)/5 = 1\).
  • qcnp — the cursor actually walks. \(d_L(q, c) = 5\), \(d_L(c, n) = 3\), \(d_L(n, p) = 5\). \(T = (5 + 3 + 5)/3 \approx 4.33\).

Normalise by the diameter so the number is comparable across layouts:

$$ \tilde T(s, L) \;=\; \frac{T(s, L)}{D_L} \in [0, 1] $$

The WrapRow 4×10 letters layer is a torus, so its diameter is \(D_L = \lfloor 4/2 \rfloor + \lfloor 10/2 \rfloor = 2 + 5 = 7\). So aaaaaa gives \(\tilde T = 0\), qwerty gives \(\tilde T = 1/7 \approx 0.14\), and qcnp gives \(\tilde T \approx 0.62\). Low \(\tilde T\) is the qwerty / dasda / 1234 failure mode — geometric adjacency, independent of entropy.
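
The same walk, computed instead of hand-counted (Python sketch; toroidal distance for WrapRow, diameter hard-coded to 7):

```python
# Average per-hop travel T and its diameter-normalised version on the
# 4x10 QWERTY letters layer under WrapRow.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm,."]
POS = {ch: (r, c) for r, row in enumerate(ROWS) for c, ch in enumerate(row)}
R, C, DIAM = 4, 10, 7   # DIAM = floor(4/2) + floor(10/2)

def d(a, b):
    dr, dc = abs(a[0] - b[0]), abs(a[1] - b[1])
    return min(dr, R - dr) + min(dc, C - dc)

def T(s):
    hops = [d(POS[x], POS[y]) for x, y in zip(s, s[1:])]
    return sum(hops) / len(hops)

print(T("aaaaaa"))                 # 0.0
print(T("qwerty"))                 # 1.0
print(round(T("qcnp"), 2))         # 4.33
print(round(T("qcnp") / DIAM, 2))  # 0.62 once normalised by the diameter
```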

Typing-complexity score. Multiply entropy by normalised dispersion, divide by cost per character:

$$ \Psi(s, L) \;=\; \frac{H(s) \cdot \tilde T(s, L)}{\mathrm{cost}(s, L) \,/\, |s|} $$

Walk the examples through again:

  • aaaaaa — \(H = 0\), so \(\Psi = 0\) regardless of how fast it is to type. Entropy-zero kills the score on its own.
  • qwerty — \(H \approx 2.585\), \(\tilde T \approx 0.14\). The cost on QWERTY is roughly 11 clicks (start already on q, five right moves, six OKs), so clicks per character is about \(11/6 \approx 1.83\). Plug in: \(\Psi \approx (2.585 \times 0.14) / 1.83 \approx 0.20\). Decent entropy, bad dispersion, mediocre score — the two factors in the numerator catch the layout-adjacency failure that \(H\) alone misses.
  • qcnp — \(H \approx 2\), \(\tilde T \approx 0.62\), but the cost jumps to around 20+ clicks because the cursor genuinely walks. Clicks per character \(\approx 5\). \(\Psi \approx (2 \times 0.62) / 5 \approx 0.25\). Slightly better than qwerty, and only because the dispersion pulls the numerator up faster than the cost pulls the denominator. That is the trade-off the metric is trying to make visible.
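
Putting the three pieces together for the qwerty example — a Python sketch with the 11-click cost hard-coded from the walk above, whereas the CLI derives it from the actual plan:

```python
# Psi = (entropy * normalised dispersion) / (clicks per character).
from collections import Counter
from math import log2

ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm,."]
POS = {ch: (r, c) for r, row in enumerate(ROWS) for c, ch in enumerate(row)}
R, C, DIAM = 4, 10, 7

def H(s):
    n = len(s)
    return -sum((k / n) * log2(k / n) for k in Counter(s).values()) + 0.0

def d(a, b):
    dr, dc = abs(a[0] - b[0]), abs(a[1] - b[1])
    return min(dr, R - dr) + min(dc, C - dc)

def psi(s, cost):
    hops = [d(POS[x], POS[y]) for x, y in zip(s, s[1:])]
    t_norm = (sum(hops) / len(hops)) / DIAM
    return H(s) * t_norm / (cost / len(s))

print(round(psi("qwerty", 11), 2))  # decent entropy, terrible dispersion
```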

If you prefer an additive trade-off instead of multiplicative, use \(H(s) - \lambda \cdot (D_L - T(s, L))\) and tune \(\lambda\). Same spirit, softer edges.

Wrap policy enters \(\Psi\) implicitly through both \(d_L\) (shapes \(T\)) and \(D_L\). The same password scores differently on WrapNone, WrapRow, and WrapGrid for identical key positions. That’s the point — \(\Psi\) is a function of the layout, not the password alone.

For the same reason as in the opening section, the \(\mathrm{cost}(s, L)\) used here treats the caps toggle as a binary action — one click on, one click off. The CLI is more honest about it.

What this metric is actually telling you

\(\Psi\) is not a security score. It carries no attacker priors, knows nothing about dictionaries or keyboard-adjacency Markov chains, and says nothing about hashing or rainbow tables. If you need a credential-stuffing oracle, use zxcvbn. What \(\Psi\) is instead is a layout-aware trade-off between typing experience and guess resistance, measured entirely inside the graph we’ve been building. Its job is to make two failure modes visible when you’re choosing a password for a specific keyboard.

Those two failure modes are why the formula looks the way it does. The all-same-character case (aaaaaa, 111111) sends \(H = 0\) and collapses the product to zero. The layout-adjacency case (qwerty, dasda, 1234) sends \(\tilde T \to 0\) because every consecutive character is physically close on the grid — \(\Psi\) heads to zero regardless of the entropy number. The qwerty example is canonical: \(H \approx 2.585\) bits sounds respectable until the cursor-barely-moves reality arrives. Either failure mode crushes \(\Psi\) on its own — the first exactly to zero, the second towards it.

The multiplicative form \(H \cdot \tilde T\) means either failure alone is enough to kill the score — no compensation allowed. The additive \(\lambda\) variant mentioned earlier is softer and lets high entropy partially redeem low dispersion; use it when you want nuance rather than hard rejection. Both are implemented in the CLI.

Computing \(H\), \(T\), \(\tilde T\), \(\Psi\), and the per-character cost by hand is the kind of thing you do once to convince yourself the formula behaves. For anything beyond two examples, you reach for code.

The numbers above were not computed on a napkin

The worked examples in this section were validated by running the go-pathfinder CLI — the companion repository for this post. The \(H = 0\) for aaaaaa, the \(T = 1\) for qwerty, the \(\Psi \approx 0.20\), the click counts — all of them come out of the same graph model described above, run through code. Hand-computation is the thing you do once to understand; the CLI is what you use when you want to compare 50 candidate passwords or rank 5 candidate layouts against each other without losing your mind.

To reproduce the numbers yourself:

# plan and total clicks
go-pathfinder -text "qwerty" -layout qwerty

# same plan plus H, T, T̃, Ψ in a table — "clicks" in the output is cost(s, L)
go-pathfinder -text "qwerty" -layout qwerty -metrics

# try a different layout / wrap policy
go-pathfinder -text "qwerty" -layout alphabetical -wrap none -metrics

There is also a -sim mode that animates the cursor moving through the optimal plan in the terminal — useful for seeing what an apparently-cheap password actually does to your thumb on a real remote.

So what do you actually do with this?

Two questions follow naturally from the metric, and both are tractable searches over the same (layer, row, col, caps) state graph we’ve been building throughout the post.

Q1. Given a layout \(L\) and a click budget \(k\), what is the most entropic password I can type?

This is a constrained optimisation over the state graph: find a string \(s\) such that \(\mathrm{cost}(s, L) \le k\) and \(\Psi(s, L)\) is maximised. One practical approach is bounded BFS over the joint space of (cursor state, accumulated string), pruning any branch whose cumulative click count exceeds \(k\). A greedier variant picks the next character at each step by maximising \(\Delta H\) per additional click — it’s not globally optimal but it’s fast and the results are good enough to be interesting. The metric naturally applies its own pressure here: very short strings hit the \(H\) ceiling quickly (few distinct characters, entropy saturates), while very long strings inflate the cost denominator and drag \(\Psi\) down. Under realistic budgets — call it 60–120 clicks for a TV-typing session — the optimum tends to land around length 8–12, drawn from layout-distant character pairs, with as few caps toggles as the string allows.
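
The greedy variant is easy to sketch. Python, on a single alphabetical 5×6 layer with WrapNone Manhattan moves — the layer choice, the tie-breaking, and the stopping rule here are mine; the CLI’s search runs on the full state graph:

```python
# Greedy budgeted builder: at each step append the affordable character with
# the best entropy gain per click; stop when no positive gain fits the budget.
from collections import Counter
from math import log2

GRID = ["abcdef", "ghijkl", "mnopqr", "stuvwx", "yz"]
POS = {ch: (r, c) for r, row in enumerate(GRID) for c, ch in enumerate(row)}

def H(s):
    n = len(s)
    return (-sum((k / n) * log2(k / n) for k in Counter(s).values()) + 0.0) if s else 0.0

def greedy(budget, start=(0, 0)):
    s, pos, spent = "", start, 0
    while True:
        best = None
        for ch in sorted(POS):                       # deterministic tie-break
            r, c = POS[ch]
            clicks = abs(r - pos[0]) + abs(c - pos[1]) + 1
            if spent + clicks > budget:
                continue
            gain = (H(s + ch) - H(s)) / clicks       # delta-H per click
            if best is None or gain > best[0]:
                best = (gain, ch, clicks)
        if best is None or (s and best[0] <= 0):     # nothing useful left
            return s, spent
        s, spent, pos = s + best[1], spent + best[2], POS[best[1]]

pw, clicks = greedy(10)
print(pw, clicks)  # nearby-but-distinct letters until the gain turns negative
```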

Q2. Given a password \(s\) I already want to type, which layout \(L\) gives me the best \(\Psi(s, L)\)?

Here the string is fixed and the layout is the variable. The search space is small enough to brute-force: enumerate the canonical layout families (QWERTY × {WrapNone, WrapRow, WrapGrid}, alphabetical grid, Apple-TV single-row) and compute \(\Psi\) for each. What you quickly find is that there is no single winner — the answer is a mapping from password class to best layout. QWERTY-WrapRow wins for letter-heavy passwords that hit common keys and avoid layer switches. Alphabetical-grid is roughly the worst-case bound: terrible average cost, predictable variance, useful mainly as a baseline. Apple-TV single-row wins for short letter-only passwords because wrap-around keeps horizontal distances below \(\lfloor n/2 \rfloor\), but it loses badly on mixed character classes because every layer switch requires a long horizontal trip to the switch key — exactly the cost structure that \(\Psi\)’s denominator punishes.

Conclusion

The exercise I started with — typing a Wi-Fi password on a TV remote with nothing but four arrow keys and OK — turns out to be a surprisingly rich graph problem once you stop treating the keyboard as a flat list of letters. The metric doesn’t tell you what password to use in a security sense; it tells you whether the password you’ve chosen is going to hurt as much as you fear on the specific physical device in front of you, and whether a different layout would hurt less. That’s a smaller question than it sounds, but it’s the right one to ask when your thumb is already tired and you’re three typos deep into a 12-character mixed-case Wi-Fi key.

If you want to poke at the numbers yourself — change a layout, swap a wrap policy, score your own passwords against \(\Psi\), or watch the cursor walk through your Wi-Fi key in animated -sim mode — the implementation is at github.com/made2591/go-pathfinder. The README has the full flag reference and the same example invocations cited above.


  1. The whole thing — the graph model, the solvers, the metrics, the animated -sim mode used to record the GIF in the README — is open-source at github.com/made2591/go-pathfinder. Clone it, make build, and you can follow along by running every example in this post locally. ↩︎

  2. From my personal collection of quotes ↩︎

  3. This is just paraphrasing The Martian, 2015 ↩︎

  4. Dijkstra’s algorithm ↩︎

  5. A* algorithm ↩︎