Arguing for Immortality

I want to live forever. I’ve always thought that not dying was a pretty obvious thing to want. To my surprise, I’ve found that a lot of people whom I usually agree with on most topics strongly disagree with me on this one. Rather than write yet another piece extolling the virtues of a far-future post-scarcity post-singularity world, I thought I’d just document some of the objections to immortality I get and my counterarguments.

Note that for the purposes of giving my conversational partners opportunities to disagree, I typically posit a form of immortality where you, and you alone are presented with the option of eternal youth with no suicide option. You constantly regenerate to perfect health at the prime of your life. There are a lot of potential ways we might go about not dying, but people tend to find objections to this particular flavor more readily than the others. Please assume this working definition for the below.

Watching your loved ones die

I get this one the most. The argument goes like

I can’t stand the thought of having to watch everyone I love die. Can you imagine living forever with everyone you used to know gone? How could you deal with the pain of building lifelong relationships and then seeing them disappear, forever?

My response to this is essentially, “Newsflash, you already are going to have to deal with that.” If you are above average in caution, intelligence, or just plain old fitness, odds are you are going to have to watch a fair share of your friends die and still continue living for a significant period of time afterwards. Statistics don’t lie. Accidents and cancer, while tragic, are inevitable.

As for having to carry the burden forever, we can again look to reality to answer this objection. “Time heals all wounds,” is an adage for a reason. If you can deal with a friend being dead for twenty years (as people already do), it seems pretty likely you’ll be able to deal with it for an indefinite future. As you live forever, you’ll make new friends and get over the loss of old ones. It seems unlikely that it will be year 2769 of missing Fred that finally pushes you over the edge, forever. What will probably actually happen is you just get really good at coping.

I’ll get bored and sick of life

This one is simple enough. It goes

I feel awful enough when I’m bored as it is. If I live forever, won’t I eventually just get bored after having done literally everything? I can’t even imagine my life past [X] years.

The way I deal with this is essentially Moore’s Law. Each day, more than a year’s worth of video is uploaded to YouTube. Every year, more books are written than you could possibly read in a regular lifetime. As society increases exponentially in complexity, the generation of interesting content explodes. It will be impossible for you to do everything when society invents new experiences faster than you can experience them. Sit back and enjoy the ride; you’re immortal.

As for the people who can’t even imagine their long term prospects right now, I say that this makes them perfectly suited for immortality. If you lack a long term vision and focus on the short term, you’ll never be dragged down by your long past. The best mindset for happiness over a long time period is always to enjoy the present.

Death’s inevitability gives life meaning

Now we start waxing philosophic with stuff like,

There is no good without evil. Having death exist gives my life meaning just like scarcity lends gold-backed currencies their value. I need to know that there is an end because it gives me a sense of urgency to my life. Without it I would just stagnate.

To me this is like saying “shitting gives eating meaning.” Does it? Evolution would beg to disagree if it were anything but a blind selection process. When was the last time you sat down for a great meal and said to yourself, “Thank god I’m going to die one day or this meal would be devoid of meaning”? What you do with your life is what lends it value or meaning, not the eventuality of death.

Besides, if you believe in death being necessary to lend your life value, why go with the arbitrary time limit of your natural lifespan? No one who has fielded this argument against me has agreed to just kill themselves when they feel like they’ve basically achieved what they’ve wanted to in life. The fact of the matter is that everyone chooses life over death on a day-to-day basis, and actions speak louder than words.

What about the afterlife?

This one can be touchy, for obvious reasons. Let’s go with the reasonably neutral objection of,

Suppose there is an afterlife. By taking immortality, I would be missing out on a greater truth and the greatest party in or outside of the universe.

Sure, you might be. But most religions (weighted by number of adherents) don’t fault you for not dying. If your purpose in living is to live according to your religion, immortality doesn’t get in the way of that. Furthermore, “the end of eternity” is presumably a construct available to gods, so it seems reasonable that you will eventually receive your just reward in any case. Religions also commonly posit Judgment Day scenarios in which all living humans are sorted into various afterlives; it seems safe to assume that if a god so wishes it, your immortality will be revoked.

I’ll be too different from everyone else

There are a couple variations on this theme. In its simplest form,

If I live forever and no one else does, I’ll be endlessly meeting new people with the foreknowledge that I will outlive them. I’ll be so fundamentally different from the average mortal person that I won’t be able to form meaningful relationships. Nobody will understand me!

This one is more of a deal breaker on a case-by-case basis. If a person really feels that they will be well and truly alienated by being immortal, well, it’s hard to tell them that they won’t be. It’s kind of self-fulfilling. That said, there are a few things you can say.

  • People actually will understand you

    If there’s anything humans are good at, it’s getting used to stuff. In short order you and your future peers will be cracking wise about your absurd inability to die over whatever the future equivalent of beer is. Maybe mechano-beer. I’m pretty sure that after an eternity of explaining yourself to mortals, you’ll be extremely proficient at helping other people understand who you are.

  • Science will eventually catch up

    Lifespan lengthening technologies only need to make it to the point where they can increase a person’s lifespan one year for every year of research before the masses become practically immortal. Wait long enough and you and your magical wish-granted immortality will be in good company.

  • Being different is an asset

    You can honestly just play your immortality for international celebrity. Make a bid for president. Be rich. Honestly, being too much like other people basically sucks. Being exceptional will make you popular, not the opposite.

Disaster scenarios

This one is kind of morbid. I thought about simply forbidding it by adding a clause to the immortality definition, but it’s interesting enough to keep in. In a few words,

What if everyone but me gets wiped out? I don’t think I could live as the only human being alive. Also, what if the government kidnaps me and uses me as a lab rat? What if I piss off some people enough to get me tortured forever? There might be times where I might actually need to kill myself, even if I would normally find living forever appealing.

Coming up with a bulletproof answer to this one is hard. It’s very tempting to say that no actual implementation of immortality will prevent you from dying in literally every case or rob you of the option of killing yourself. But really, that would be cheating.

A real counterargument has to take the form of, “You already run similar risks today, as a mortal.” You can be used as a medical experiment or tortured this very day, and they can keep you alive for a very long time. As an immortal, it’s likely that you will eventually outlive your captors.

Also consider that it’s far more likely that any government or entity with power will want your cooperation instead of your incarceration. You could be the world’s greatest astronaut or the guy who goes into nuclear reactor cores melting down.

However, the extinction-of-the-human-race scenario actually does throw a wrench into this argument. To this, all I can really say is that the minuscule risk you run of a mass extinction is just a risk you’ll have to swallow in order to reap the vast rewards of immortality. Maybe there’s other life out there that you can find. Maybe in your spare time you’ll engineer a way to rebuild the world.

Wrap-up

This is a fun topic for me, and I hope I’ve been interesting without being too offensive to people on either side of the coin. This topic is important in the sense that once you accept that immortality is a pretty good idea, you open the door for a lot of other interesting arguments to be made about rationality, ethics, and just general decision making. If you feel I’ve made some sort of egregious error or just want to weigh in for any reason, please feel free to comment below.

EDIT: The Hacker News discussion of this page is fairly interesting. Give it a try, too!

Facebook Puzzle: sophie

This week: Facebook’s sophie puzzle. This one is “buffet” difficulty, which translates roughly to “the underlying problem is NP-complete,” which explains why I have such a hard time choosing food at sushi buffets. In any case, the problem is to find your cat in your apartment, given how likely she is to be in each location and the transit times between the various locations in your home.

I’ll document here the various bad solutions I came up with on my way to a decent one, and as a bonus: an optimized-ish version in C++!

About the math

The problem asks you to minimize the expected time to find sophie. What does this mean? Take a look at the example input (comments mine).

4
#node name    #probability sophie is there
front_door    .2
in_cabinet    .3
under_bed     .4
behind_blinds .1
5
#node x    #node y       #time between x and y
front_door under_bed     5
under_bed  behind_blinds 9
front_door behind_blinds 5
front_door in_cabinet    2
in_cabinet behind_blinds 6

This says there are four nodes and five edges between those nodes, and that 40% of the time sophie is going to be under the bed. If sophie were under the bed 100% of the time, the optimal path to minimize the expected time to find her would just be the path that takes you to the bed in the shortest amount of time. But since some nodes are unlikely to hide sophie, you can afford to take your sweet time getting to them.

For this sample input, the optimal path is front_door, in_cabinet, under_bed, behind_blinds. Note that to go from in_cabinet to under_bed, you should pass through the front_door node. The expectation for this path is 6.00 seconds, as shown in this snippet from David Eisenstat’s site:

Pr(front_door) * 0
+ Pr(in_cabinet) * Distance(front_door, in_cabinet)
+ Pr(under_bed) * (Distance(front_door, in_cabinet)
                   + Distance(in_cabinet, under_bed))
+ Pr(behind_blinds) * (Distance(front_door, in_cabinet)
                       + Distance(in_cabinet, under_bed)
                       + Distance(under_bed, behind_blinds))
    = .2 * 0 + .3 * 2 + .4 * (2 + 7) + .1 * (2 + 7 + 9) = 6.00
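
In code, the expectation of a fixed visiting order is just a running sum. Here’s a minimal Ruby sketch (my illustration, not part of the submitted solution), assuming dist holds the all-pairs shortest distances built in the next section and probs the hiding probabilities:

# expected time to find sophie along a fixed visiting order:
# each node contributes its probability times the walking time
# accumulated up to the moment you arrive at it
def expected_time(order, dist, probs)
    time = 0.0
    expected = 0.0                     # the starting node is reached at time 0
    order.each_cons(2) do |from, to|
        time += dist[from][to]         # walk (via shortest path) to the next spot
        expected += probs[to] * time
    end
    expected
end

# for the sample input above, with nodes indexed in input order:
# expected_time([0, 1, 2, 3], dist, probs) # => 6.0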

Building the graph

The input only includes edges between particular nodes. In order to know that, say, the distance between the cabinet and the bed is 7 (through the front door), you have to build up the shortest distances between every pair of nodes in the graph. This is known as the “all pairs shortest path” problem, and there is a quite famous dynamic programming algorithm that solves it: the Floyd–Warshall algorithm. Trivially implemented in Ruby:

# note that $weights[x][y] is initialized to either
# Float::MAX if there is no edge between x and y, or
# to whatever the length of the edge is if there is.
def floyd_warshall
    for k in 0..$num-1
    for i in 0..$num-1
    for j in 0..$num-1
        if $weights[i][k] + $weights[k][j] < $weights[i][j]
            $weights[i][j] = $weights[i][k] + $weights[k][j]
            $next[i][j] = k
        end
    end
    end
    end
end

# links memoizes the set of intermediate nodes you need
# to traverse between nodes i and j
def links(i, j)
    k = $next[i][j]
    return k if k.class == Set
    $next[j][i] = $next[i][j] = (k.nil? or i == j) ?
        Set.new([]) :
        (links(i, k) + Set.new([k]) + links(k, j))
end

This gives a good starting point for actually solving the problem.

BFS solution

In my hubris, I figured a breadth-first search where you always expand the path with the lowest current expected time (really more of a best-first search) would work. Here’s what it looks like:

def solve
    queue = MinHeap.new
    queue.push 0.0, [[0], Set.new((1..$num-1).select {|n| $probs[n] > 0}), 0.0, 0.0]
    while not queue.empty?
        node, remain, time, expected = queue.pop
        if remain.empty?
            p node
            return expected
        end
        # only iterate remaining nodes that you don't need to
        # go through other remaining nodes to reach
        remain.select {|n|
            (links(node.last, n) & remain).empty?
        }.each do |n|
            new_time = time + $weights[node.last][n]
            new_expected = expected + $probs[n] * new_time
            queue.push new_expected, [node + [n], remain - [n], new_time, new_expected]
        end
    end
end

In my defense, I hadn’t yet realized that the sophie problem is a variant of the traveling salesman problem and that this kind of search would take forever on large graphs. It doesn’t work because you incrementally build all the bad paths on your way to completing the first one. Complexity: proportional to the number of paths, or O(n!).

DP Solution

Hitting upon the realization that the problem is a variant of the traveling salesman problem, I decided to try the canonical dynamic programming solution to TSP.

The DP solution requires that you build a structure like

C[subset][j]

where subset is some subset of all nodes, and j is a node in that subset. The value of this entry should be the minimum expected time of a route that starts at node 0, visits every node in subset, and ends at node j (along with that route’s elapsed walking time, which you need in order to extend it). The problem then reduces to finding the minimum of:

C[subset - [j]][i] extended by the edge from i to j   # for all i in subset - [j]

There are some problems here, but first, some code:

def solve
    relevant = (1..$num-1).select {|n| $probs[n] > 0}
    hash = {}
    hash[[0]] = {0 => [0, 0]}
    for size in 1..relevant.size
        relevant.combination(size).each do |subset|
            subset.insert 0, 0
            hash[subset] = {0 => [Float::MAX, Float::MAX]}
            for j in subset
                next if j == 0
                reduced = subset - [j]
                hash[subset][j] = reduced.collect {|i|
                    e, t = hash[reduced][i]
                    t += $weights[i][j]
                    [e + t * $probs[j], t]
                }.min {|a,b| a.first <=> b.first}
            end
        end
    end
    hash[[0] + relevant].values.collect {|x| x.first}.min
end

This works, but isn’t fast. A 17-node graph took me about 10 minutes to finish. The problem is that since subsets aren’t ordered, there is no convenient way to represent them as array indices (short of the bitmask trick sketched below), so you end up hashing entire subsets. Since there are 2^n subsets, and you must compute the path that ends in each node of each subset, which itself requires examining all the paths over that subset minus one of its elements (breathe), the complexity here is O(2^n * n^2).
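
For reference, the standard way around the hashing is to encode each subset as a bitmask, so the table becomes a plain 2^n-by-n array. A sketch of that encoding (not what my solution above does; node 0 is still the start, and I skip the zero-probability filtering for brevity):

def solve_bitmask
    n = $num
    inf = Float::MAX
    # c[mask][j] = [expected time, elapsed time] of the best route that starts
    # at node 0, visits exactly the nodes whose bits are set in mask, ends at j
    c = Array.new(1 << n) { Array.new(n) { [inf, inf] } }
    c[1][0] = [0.0, 0.0]
    (1...(1 << n)).each do |mask|
        next if mask & 1 == 0                 # every route starts at node 0
        n.times do |j|
            next if j == 0 or mask[j] == 0
            prev = mask ^ (1 << j)
            n.times do |i|
                next if prev[i] == 0
                e, t = c[prev][i]
                next if e == inf
                t2 = t + $weights[i][j]
                e2 = e + $probs[j] * t2
                c[mask][j] = [e2, t2] if e2 < c[mask][j][0]
            end
        end
    end
    c[(1 << n) - 1].map(&:first).min
end

Indexing by an integer mask avoids hashing arrays; the O(2^n * n^2) work is still there, but the constant factor should drop considerably.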

Recursive Backtracking (Pruning) Solution

Backtracking can be thought of as essentially a depth-first search with fast failure. For example, say you have found a complete path that you know will give you an expected time of 30 seconds. Now you are building another path, and halfway through you know your partial expected time is already 31 seconds. You can abandon this path, saving yourself the hassle of expanding all of that partial path’s children.

Skipping entire subtrees is known as pruning, and you can achieve some pretty massive savings depending on how well you implement it. My solution was to very conservatively estimate the remaining expectation of a partial path. For instance, say you are partway through a path with a current expected time of 10 seconds and an elapsed walking time of 20 seconds, and you have covered 60% of the places where sophie can be. Even in the perfect case where the next node is 0 seconds away and has the whole remaining 40% probability of hiding sophie, you would still incur at least 20 * .4 = 8 additional seconds of expected time. If the best complete path you have found so far has an expected time of, say, 10.5 seconds, you can now prune this subtree where you could not have before (10 alone is under 10.5, but 10 + 8 is not).

And without further ado, here’s the code.

$min = Float::MAX
def solve(node, remain, unseen, expect = 0, time = 0)
    return if expect + unseen * time >= $min
    return ($min = expect) if remain.empty?
    remain.each do |n|
        next_time  = time + $weights[node][n]
        solve n,
              remain - [n],
              unseen - $probs[n],
              expect + next_time * $probs[n],
              next_time
    end
    $min
end

This works fairly well. We can do that 17-node graph in 30 seconds now. I don’t have a good estimate of the complexity improvement here, since you can construct corner cases that prevent any pruning from happening.

Optimizations!

Ruby is slow. At least, Cygwin’s default Ruby 1.8.7 interpreter is slow. I decided to reimplement the whole deal in C++ and see what kind of speedups I could achieve. Here is my initial implementation in C++:

double solve(int node, set<int> &remain, double unseen,
        double expect = 0.0, double time = 0.0) {
    static double min = numeric_limits<double>::max();
    if (expect + unseen * time >= min)
        return -1;
    if (remain.size() == 0) {
        min = expect;
        return -1;
    }
    for (set<int>::iterator n = remain.begin(); n != remain.end(); n++) {
        int next = *n;
        double next_time = time + weights[node][next];
        set<int> next_remain = remain;
        next_remain.erase(next);
        solve(next, next_remain, unseen - probs[next],
            expect + next_time * probs[next], next_time);
    }
    return min;
}

However, the speedup here was only a factor of two or so. Where are the bottlenecks? Turns out, a big one in both Ruby and C++ is the constant recreation of the remainder set at

set<int> next_remain = remain;

It’s much faster to just do:

set<int> next_remain = remain;
for (set<int>::iterator n = remain.begin(); n != remain.end(); n++) {
    int next = *n;
    double next_time = time + weights[node][next];
    next_remain.erase(next);
    solve(next, next_remain, unseen - probs[next],
        expect + next_time * probs[next], next_time);
    next_remain.insert(next);
}

This way, you only do one set copy per recursion and then pass it around to all of your children. “But Vincent,” you say, “why even bother recreating the set at each recursion? Can’t you just pass along one set of remaining nodes and add and remove from it?” Sure, but it’s very annoying to keep iterators valid in a set that is constantly shrinking and growing across iteration and recursion. Here’s what I came up with:

double solve(int node, vector<node_entry> &remain, double unseen,
        double expect = 0.0, double time = 0.0) {
    static double min = numeric_limits<double>::max();
    if (expect + unseen * time >= min)
        return -1;

    bool empty = true;
    for (vector<node_entry>::iterator n = remain.begin(); n != remain.end(); n++) {
        if (!n->active)
            continue;
        empty = false;
        int next = n->index;
        double next_time = time + weights[node][next];
        n->active = false;
        solve(next, remain, unseen - probs[next],
            expect + next_time * probs[next], next_time);
        n->active = true;
    }
    if (empty) min = expect;
    return min;
}

This way we just store whether an element is active or not in the data model itself, instead of representing that information with presence in a set. This is nice because adding/removing from a set, while O(log(n)) fast, ain’t no O(1). This gets us (for our 17-node graph):

$ make && time ./sophie in_sophie5.txt
make: `sophie' is up to date.
38.20

real    0m0.220s
user    0m0.156s
sys     0m0.030s

Hooray!

Final Thoughts

I didn’t talk about any of the edge cases, but you need to check whether it’s actually possible to guarantee finding sophie. In particular, if there is a node that sophie has a chance of being in but that isn’t reachable from the starting node, you need to fail.
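
A cheap way to detect that case after floyd_warshall has run is to confirm every node sophie might be in sits at a finite distance from node 0. A sketch, reusing the globals from above; I’m assuming the expected failure output is -1.00, so check that against the puzzle spec:

# if any node with a nonzero probability is still at "infinite" distance
# from the starting node, no route can guarantee finding sophie
def solvable?
    (1..$num-1).all? do |n|
        $probs[n] <= 0 or $weights[0][n] < Float::MAX
    end
end

puts "-1.00" unless solvable?   # assumed failure value -- verify against the spec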

Dear submitter,

Thank you for your submission of a puzzle solution to Facebook! After running your solution to sophie (received on March 5, 2011, 8:14 pm), I have determined it to be correct. Your solution ran for 25.118 ms on its longest test case.

Also, full source and some plagiarized test cases are all on github.

Facebook Puzzle: peaktraffic

Last week I was working on Facebook’s peak traffic puzzle, which was a pretty entertaining and informative exercise. The idea is basically to parse a log file and generate an undirected graph that represents mutual friendships between Facebook users. Then you are to find every group of friends where each friend in the group is friends with every other person in the group.

I wrote a naive implementation at first that looked something like

# $seen is a hash to memoize previously seen sets
# $sparse is a hash of usernames to a list of neighboring usernames
# $sets is the list of output clusters

$seen = {}
$sets = []
def subgraph(set, adj)
    hash = (set + adj).sort
    return if $seen[hash]
    $sets.push set.sort.join(", ") if adj.empty? and set.size > 2
    adj.each {|node| subgraph(set + [node], $sparse[node] & adj)}
    $seen[hash] = true
end

$sparse.keys.each do |vertex|
    subgraph([vertex], $sparse[vertex])
end

This appeared to work pretty well on my tests, but kept failing on Facebook puzzle submission for some reason I couldn’t track down at the time. In my frustration, I went to the internet. Turns out, this problem is actually known as listing maximal cliques, a variant of the clique problem. The prior link has more information about cliques, which are exactly the subgraphs with the friend property this problem wants. Additionally, there is a known “pretty good” strategy for solving it: the Bron–Kerbosch algorithm.
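
For reference, the textbook (pivot-free) form of the recursion looks roughly like the sketch below, assuming $sparse maps each vertex to an array of its neighbors and $sets collects the output, as in my naive version:

# r: clique built so far; p: vertices that extend r; x: vertices that also
# extend r but have already been handled (used only to reject non-maximal r)
def bron_kerbosch_basic(r, p, x)
    if p.empty? and x.empty?
        $sets.push r.sort.join(", ") if r.size > 2
        return
    end
    until p.empty?
        v = p.first
        bron_kerbosch_basic(r + [v], p & $sparse[v], x & $sparse[v])
        p -= [v]
        x += [v]
    end
end

# entry point: bron_kerbosch_basic([], $sparse.keys, [])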

I went all out and applied a couple of optimizations, namely (warning: abstract PDFs) pivoting and degeneracy ordering, two techniques used to cut down on the number of recursive calls. At this point my code looked something like

def generate_degeneracy_ordering
    d = []  #degree buckets
    dw = {} #degree for each vertex
    $sparse.each_pair do |vertex, neighbors|
        deg = neighbors.size
        d[deg] ||= []
        d[deg].push vertex
        dw[vertex] = deg
    end
    d.each_index {|i| d[i] ||= []}
    $sparse.size.times do
        vertex = d.find {|x| !x.empty?}.pop
        $degen.push vertex
        for neighbor in $sparse[vertex]
            if d[dw[neighbor]].delete neighbor
                dw[neighbor] -= 1
                d[dw[neighbor]].push neighbor
            end
        end
    end
end

def bron_kerbosch(set, points, exclude, pivot_neighbors=nil)
    if points.empty?
        $sets.push set.sort.join(', ') if set.size > 2 and exclude.empty?
        return
    end

    pivot_neighbors ||= (exclude.empty? or $sparse[points.last].size > $sparse[exclude.last].size) ?
        $sparse[points.last] : $sparse[exclude.last]

    points.each_with_index do |vertex, i|
        next if pivot_neighbors.include? vertex
        points[i] = nil
        bron_kerbosch(set + [vertex],
                      points & $sparse[vertex],
                      exclude & $sparse[vertex])
        exclude.push vertex
    end
end

exit unless ARGV.size == 1
ingest(ARGV[0])

generate_degeneracy_ordering
before = []
after = $degen[1..$degen.size-1]
$degen.each do |vertex|
    intersect = after & $sparse[vertex]
    bron_kerbosch([vertex],
                  intersect,
                  before & $sparse[vertex],
                  $sparse[intersect.last]) #last elements in $degen have highest degrees
    before.push vertex
    after.shift
end

This let me parse a 120MB input file in one minute flat. However, for some reason it was slower than my naive solution, which could, ironically, do it in 48s. It did pass the Facebook puzzle checker, though, so at least I had that. If anyone has any insight into why my naive solution appears to outperform bron_kerbosch, I would greatly appreciate it, as that does not seem right. My guess is Ruby’s slow implementations of the Array “set-like” operators (&, +).
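
A quick way to test that guess is a micro-benchmark of Array versus Set intersection on inputs roughly the size of a neighbor list (the sizes below are assumptions, adjust for your data):

require 'benchmark'
require 'set'

a_arr = (0...5000).to_a.shuffle[0, 500]
b_arr = (0...5000).to_a.shuffle[0, 500]
a_set = Set.new(a_arr)
b_set = Set.new(b_arr)

Benchmark.bm(10) do |bm|
    bm.report("Array#&") { 10_000.times { a_arr & b_arr } }
    bm.report("Set#&")   { 10_000.times { a_set & b_set } }
end

If Array#& dominates, converting the adjacency lists to Sets (or to sorted arrays with a merge-style intersection) would be the obvious next experiment.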

Final thoughts

I spent 2-3 days stuck on bugs that amounted to

  • parsing input off by one whitespace
  • output missing a final newline
  • mysterious bug that turned out to be RUBY 1.8.6 NOT HAVING MAX_BY AND ME TESTING IN 1.8.7

Facebook gives you absolutely no clues as to what went wrong beyond whether your build had a syntax error. It falls on you to both duplicate their runtime environment and generate workable tests. But finally seeing the confirmation email is definitely worth it. Here are the results after I got my naive solution running.

Dear submitter,

Thank you for your submission of a puzzle solution to Facebook! After running your solution to peaktraffic (received on February 28, 2011, 4:07 pm), I have determined it to be correct. Your solution ran for 1162.186 ms on its longest test case.

Here are my two solutions to this problem. The code path that executes the Bron–Kerbosch algorithm is not active by default, but it’s easy to spot. They both work.

Cheap Toto Pagination

Toto is a great tiny blogging platform for Ruby/Rack. However, it doesn’t expose much in the way of an MVC structure, and it can be just annoying enough when you want to add some feature that isn’t there. In this case, I wanted to add some simple older/newer pagination to the front page. To my chagrin, I couldn’t find a way to pass variables to a Toto page without using GET query syntax (i.e. ?page=1), and I still wanted to hold onto the Rails RESTful convention of /page/1, so I monkey-patched the Toto::Site dispatcher, like so:

# in config.ru
class Toto::Site
    alias_method :old_go, :go

    def go route, env = {}, type = :html
        if not route.first =~ /\d{4}/ and route.size == 2 and route.last =~ /(\d+)/
            @config[:id] = route.last.to_i
            route.pop
        end
        ret = old_go route, env, type
        @config.delete :id
        ret
    end
end
...
# and in the config initializer block:
set :root, "page"                                           # page to load on /

You can see that we intercept routes that look like name/1234 and pass the numeric portion of the route in @config[:id], and then clear @config[:id] (because @config is persistent). This is pretty hacky and only really acceptable in the context of Heroku caching everything.

and in templates/pages/page.rhtml…

<%
    page = @config[:id]
    per_page = @config[:articles_per_page]
    page = 1 if (page.nil? or (page-1) * per_page > @articles.length) or page < 1
    page_results = @articles[(page-1) * per_page .. page * per_page - 1]
    prev_page = page > 1 ? page - 1 : nil
    next_page = @articles.length > page * per_page ? page + 1 : nil
%>
...
<p id="footer">
<% if prev_page %>
    <a href="/page/<%=prev_page%>">&laquo; newer</a>
    <% if next_page  %>|<% end %>
<% end %>
<% if next_page %>
    <a href="/page/<%=next_page%>">older &raquo;</a>
<% end %>
</p>

Also, syntax highlighting brought to you by highlight.js.

Interactive Google Doodle

Google’s take on an interactive ball logo was interesting enough to me that I felt that I needed to write something at least as fun. My take is backed by HTML5, so you will need a suitably compliant browser to view it.

Features

  • Mass based ball collisions
  • Mouse input to scatter balls
  • Balls “anchored” to positions with dampened springs
  • Typing creates additional interactable balls
  • Searching cues exit animation

Technical information

The code for this demo (as well as this entire blog) is on github. Here is the primary JavaScript engine.

In Google’s original doodle, the balls would react to the presence of the mouse with changes in size and velocity. They would not, however, collide with anything. I thought this was a shame, and set out to make an HTML5 canvas app that could handle collisions between a large number of objects.

A naive initial implementation worked well enough. It looked like this (pseudocode is rubyish):

for i in 0 to circles.length - 2
    for j in i + 1 to circles.length - 1
        handleCollision(circles[i], circles[j]);
    end
end

This served me well enough in Canary on my admittedly beefy desktop. It wasn’t until I tried to run the same code in Firefox 3.x on my work laptop that I realized it probably wasn’t performant across a wide range of configurations.

The thing to do, then, was to set up a 2D grid covering the entire canvas. Each cell is about the diameter of a circle, and knows about any circles within itself, as well as up to four of its neighbors. Then, collision code becomes something like

for circle in circles
    for potentialCollider in grid.getPotentialColliders(circle.pos)
        handleCollision(circle, potentialCollider);
    end
end

def grid.getPotentialColliders(pos)
    cell = getCell(pos)
    ret = []
    for neighbor in cell.neighbors
        ret = ret.concat(neighbor.objects)
    end
    ret
end

which takes the runtime complexity down from n^2 to about n. Neat!
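
Building the grid itself is just bucketing by integer cell coordinates. A small sketch of the idea (in Ruby to match the rest of this post, though the demo itself is JavaScript; cell_size and the circle x/y accessors are assumptions):

# bucket circles by cell so a lookup near a position only returns circles
# close enough to possibly collide (here: the 3x3 block of cells around the
# position, rather than the demo's four-neighbor variant)
class CollisionGrid
    def initialize(cell_size)
        @cell_size = cell_size
        @cells = Hash.new { |h, k| h[k] = [] }
    end

    def insert(circle)
        @cells[cell_of(circle.x, circle.y)] << circle
    end

    def potential_colliders(x, y)
        cx, cy = cell_of(x, y)
        (-1..1).flat_map { |dx|
            (-1..1).flat_map { |dy| @cells.fetch([cx + dx, cy + dy], []) }
        }
    end

    def cell_of(x, y)
        [(x / @cell_size).floor, (y / @cell_size).floor]
    end
end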

Probably the hardest part was making circles appear for the query text. Understanding whether the user added characters, deleted, replaced, pasted, etc. is kind of difficult. I honestly hacked together a solution that works in 95% of use cases, but the right solution would be to implement a full-fledged dynamic programming algorithm to find the minimum edit distance.
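
For the curious, the DP in question is the classic Levenshtein recurrence. A sketch (again in Ruby just to show the idea; the demo would need a JavaScript port plus a traceback step to recover which characters actually changed):

# minimum number of single-character insertions, deletions, and substitutions
# needed to turn string a into string b
def edit_distance(a, b)
    dp = Array.new(a.length + 1) { |i| Array.new(b.length + 1) { |j| i == 0 ? j : j == 0 ? i : 0 } }
    (1..a.length).each do |i|
        (1..b.length).each do |j|
            cost = a[i - 1] == b[j - 1] ? 0 : 1
            dp[i][j] = [dp[i - 1][j] + 1,         # delete a character from a
                        dp[i][j - 1] + 1,         # insert a character into a
                        dp[i - 1][j - 1] + cost   # substitute (or keep)
                       ].min
        end
    end
    dp[a.length][b.length]
end

# edit_distance("google", "goggles") # => 2 (one substitution, one insertion)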

And finally, a big thanks to Rob Hawke’s recreation of the Google ball logo for both reference and initial source data points.