The Assignment Problem and Maximum Flow

Here’s a problem: Your business assigns contractors to fulfill contracts. You look through your rosters and decide which contractors are available for a one-month engagement, and you look through your available contracts to see which of them are month-long tasks. Given that you know how effectively each contractor can fulfill each contract, how do you assign contractors to maximize the overall effectiveness for that month?

This is an example of the assignment problem, which can be solved with the classical Hungarian algorithm.

The Hungarian algorithm (also known as the Kuhn-Munkres algorithm) is a polynomial-time algorithm that finds a maximum-weight matching in a weighted bipartite graph. Here, the contractors and the contracts can be modeled as a bipartite graph, with their effectiveness as the weights of the edges between the contractor and the contract nodes.

In this article, you will learn about an implementation of the Hungarian algorithm that uses the Edmonds-Karp algorithm to solve the linear assignment problem. You will also learn how the Edmonds-Karp algorithm is a slight modification of the Ford-Fulkerson method and how this modification is important.

The Maximum Flow Problem

The maximum flow problem itself can be described informally as the problem of moving some fluid or gas through a network of pipes from a single source to a single terminal. This is done with an assumption that the pressure in the network is sufficient to ensure that the fluid or gas cannot linger in any length of pipe or pipe fitting (those places where different lengths of pipe meet).

By making certain changes to the graph, the assignment problem can be turned into a maximum flow problem.

Preliminaries

The ideas needed to solve these problems arise in many mathematical and engineering disciplines; often, similar concepts are known by different names and expressed in different ways (e.g., adjacency matrices and adjacency lists). Since these ideas are quite general, choices must be made about how generally these concepts will be defined for any given setting.

This article will not assume any prior knowledge beyond a little introductory set theory.

The implementations in this post represent the problems as directed graphs (digraphs).

DiGraphs

A digraph has two attributes, setOfNodes and setOfArcs. Both of these attributes are sets (unordered collections). In the code blocks in this post, I’m actually using Python’s frozenset, but that detail isn’t particularly important.

(Note: This, like all other code in this article, is written in Python 3.6.)

Nodes

A node n is composed of two attributes:

  • n.uid: A unique identifier.

  • n.datum: This represents any data object.

Arcs

An arc a is composed of three attributes:

  • a.fromNode: This is a node, as defined above.

  • a.toNode: This is a node, as defined above.

  • a.datum: This represents any data object.
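Here’s a minimal sketch of one possible encoding of these definitions, using namedtuples; the field names follow the attribute names used in this article’s prose, and the details may differ from the original implementation:

from collections import namedtuple

Node = namedtuple('Node', ['uid', 'datum'])
Arc = namedtuple('Arc', ['fromNode', 'toNode', 'datum'])
DiGraph = namedtuple('DiGraph', ['setOfNodes', 'setOfArcs'])

# A tiny two-node digraph: one arc from node a to node b.
a = Node(uid='a', datum=None)
b = Node(uid='b', datum=None)
G = DiGraph(setOfNodes=frozenset({a, b}),
            setOfArcs=frozenset({Arc(a, b, None)}))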

The set of arcs in the digraph represents a binary relation on the nodes in the digraph. The existence of arc a implies that a relationship exists between a.fromNode and a.toNode.

In a directed graph (as opposed to an undirected graph), the existence of a relationship between a.fromNode and a.toNode does not imply that a similar relationship between a.toNode and a.fromNode exists.

This is because in a directed graph, the relationship being expressed is not necessarily symmetric.

DiGraphs

Nodes and arcs can be used to define a digraph, but for convenience, in the algorithms below, a digraph will be represented as a dictionary.

Here’s a method that can convert the graph representation above into a dictionary representation similar to this one:
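A sketch of such a conversion, assuming the namedtuple encoding above (the dictionary maps each node to the frozenset of arcs leaving it):

def digraph_to_dict(G):
    # Map each node to the set of arcs that leave it.
    return {n: frozenset(a for a in G.setOfArcs if a.fromNode == n)
            for n in G.setOfNodes}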

And here’s another one that converts it into a dictionary of dictionaries, another operation that will be useful:
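A sketch under the same assumptions:

def digraph_to_double_dict(G):
    # Map fromNode -> toNode -> the set of arcs connecting them.
    double_dict = {}
    for a in G.setOfArcs:
        double_dict.setdefault(a.fromNode, {}).setdefault(a.toNode, set()).add(a)
    return double_dict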

When the article talks about a digraph as represented by a dictionary, it will call that representation out explicitly.

Sometimes it’s helpful to fetch a node from a digraph by looking it up through its uid (unique identifier):
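A sketch of the lookup, assuming the encoding above:

def find_node_by_uid(uid, G):
    # A uid identifies exactly one node in a digraph.
    matches = [n for n in G.setOfNodes if n.uid == uid]
    assert len(matches) == 1
    return matches[0]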

In defining graphs, people sometimes use the terms node and vertex to refer to the same concept; the same is true of the terms arc and edge.

Two popular graph representations in Python are one that uses dictionaries and another that uses objects to represent graphs. The representation in this post is somewhere in between these two commonly used representations.

This is my digraph representation. There are many like it, but this one is mine.

Walks and Paths

Let S_Arcs be a finite sequence (ordered collection) of arcs in a digraph G such that if a is any arc in S_Arcs except for the last, and b follows a in the sequence, then there must be a node n in G.setOfNodes such that a.toNode == n == b.fromNode.

Starting from the first arc in S_Arcs, and continuing until the last arc in S_Arcs, collect (in the order encountered) all nodes n as defined above between each two consecutive arcs in S_Arcs. Label the ordered collection of nodes collected during this operation S_Nodes.

  • If any node appears more than once in the sequence S_Nodes, then call S_Arcs a walk on digraph G.

  • Otherwise, call S_Arcs a path from S_Arcs[0].fromNode to S_Arcs[-1].toNode on digraph G.
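Here’s a small sketch of these definitions, again assuming the namedtuple encoding from earlier (for simplicity it collects the endpoints of the sequence along with the shared intermediate nodes):

def node_sequence_of(S_Arcs):
    # The first arc's tail plus the head of every arc, in order.
    return [S_Arcs[0].fromNode] + [a.toNode for a in S_Arcs]

def is_path(S_Arcs):
    # A path visits no node twice; a sequence with a repeat is a walk.
    S_Nodes = node_sequence_of(S_Arcs)
    return len(S_Nodes) == len(set(S_Nodes))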

Source Node

Call node s a source node in digraph G if s is in G.setOfNodes and G.setOfArcs contains no arc a such that a.toNode == s.

Terminal Node

Call node t a terminal node in digraph G if t is in G.setOfNodes and G.setOfArcs contains no arc a such that a.fromNode == t.
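A sketch of both definitions under the same assumed encoding:

def source_nodes(G):
    # Nodes with no incoming arcs.
    return frozenset(n for n in G.setOfNodes
                     if all(a.toNode != n for a in G.setOfArcs))

def terminal_nodes(G):
    # Nodes with no outgoing arcs.
    return frozenset(n for n in G.setOfNodes
                     if all(a.fromNode != n for a in G.setOfArcs))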

Cuts and s-t Cuts

A cut cut of a connected digraph G is a subset of arcs from G.setOfArcs which partitions the set of nodes G.setOfNodes in G. G is connected if every node n in G.setOfNodes has at least one arc a in G.setOfArcs such that either n == a.fromNode or n == a.toNode, but a.fromNode != a.toNode.

The definition above refers to a subset of arcs, but it can also define a partition of the nodes of G.setOfNodes.

For the helper function below, which recovers this node partition, n is a node in set G.setOfNodes of digraph G, and cut is a cut of G:
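The original helpers aren’t reproduced here; as a stand-in, here’s one way to recover the two node partitions induced by a cut (the connected components that remain after the cut’s arcs are removed, ignoring arc direction):

def partitions_of(cut, G):
    # Connected components of G after removing the cut's arcs.
    neighbours = {n: set() for n in G.setOfNodes}
    for a in (G.setOfArcs - cut):
        neighbours[a.fromNode].add(a.toNode)
        neighbours[a.toNode].add(a.fromNode)
    seen, components = set(), []
    for start in G.setOfNodes:
        if start in seen:
            continue
        component, stack = {start}, [start]
        seen.add(start)
        while stack:
            n = stack.pop()
            for m in neighbours[n]:
                if m not in seen:
                    seen.add(m)
                    component.add(m)
                    stack.append(m)
        components.append(frozenset(component))
    return components  # exactly two frozensets when cut is a cut of G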

Let cut be a cut of digraph G.

cut is called an x-y cut if node x is in one of the two node partitions induced by cut and node y is in the other. When the node x in an x-y cut is a source node and node y in the x-y cut is a terminal node, then this cut is called an s-t cut.

Flow Networks

You can use a digraph to represent a flow network.

Assign each node n, where n is in G.setOfNodes, an n.datum that is a FlowNodeDatum (holding the values n.datum.flowIn and n.datum.flowOut).

Assign each arc a, where a is in G.setOfArcs, an a.datum that is a FlowArcDatum (holding the values a.datum.capacity and a.datum.flow).

a.datum.capacity and a.datum.flow are positive real numbers.

n.datum.flowIn and n.datum.flowOut are also positive real numbers.

For every node n in G.setOfNodes: n.datum.flowIn is the sum of a.datum.flow for all arcs a with a.toNode == n, and n.datum.flowOut is the sum of a.datum.flow for all arcs a with a.fromNode == n.

Digraph G now represents a flow network.

The flow of G refers to the a.datum.flow for all arcs a in G.setOfArcs.
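A minimal sketch of the two datum types, again as namedtuples (the names match this article’s prose; the original definitions may differ):

from collections import namedtuple

FlowNodeDatum = namedtuple('FlowNodeDatum', ['flowIn', 'flowOut'])
FlowArcDatum = namedtuple('FlowArcDatum', ['capacity', 'flow'])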

Feasible Flows

Let digraph G represent a flow network.

The flow network represented by G has feasible flows if:

  1. For every node n in G.setOfNodes except for source nodes and terminal nodes: n.datum.flowIn == n.datum.flowOut.

  2. For every arc a in G.setOfArcs: a.datum.flow <= a.datum.capacity.

Condition 1 is called a conservation constraint.

Condition 2 is called a capacity constraint.
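A sketch of checking both constraints, reusing the source_nodes and terminal_nodes helpers from earlier:

def flows_are_feasible(G):
    # Conservation: flowIn == flowOut at every non-source, non-terminal node.
    interior = G.setOfNodes - source_nodes(G) - terminal_nodes(G)
    conserved = all(n.datum.flowIn == n.datum.flowOut for n in interior)
    # Capacity: no arc carries more flow than its capacity.
    within_capacity = all(a.datum.flow <= a.datum.capacity
                          for a in G.setOfArcs)
    return conserved and within_capacity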

Cut Capacity

The cut capacity of an s-t cut with source node s and terminal node t of a flow network represented by a digraph G is the sum of a.datum.capacity for the arcs in the cut that go from the partition containing s to the partition containing t.

Minimum Capacity Cut

Let stCut be an s-t cut of a flow network represented by a digraph G.

stCut is the minimum capacity cut of the flow network represented by G if there is no other s-t cut in this flow network with a smaller cut capacity.

Stripping the Flows Away

I would like to refer to the digraph that would be the result of taking a digraph G and stripping away all the flow data from all the nodes in G.setOfNodes and also all the arcs in G.setOfArcs; call this operation strip_flows(G).
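A sketch of this operation, assuming the encoding from earlier (nodes are rebuilt by uid so the stripped arcs point at the stripped nodes):

def strip_flows(G):
    # Zero the flow on every node and arc, keeping structure and capacities.
    stripped = {n.uid: Node(n.uid, FlowNodeDatum(0.0, 0.0))
                for n in G.setOfNodes}
    arcs = frozenset(
        Arc(stripped[a.fromNode.uid], stripped[a.toNode.uid],
            FlowArcDatum(a.datum.capacity, 0.0))
        for a in G.setOfArcs)
    return DiGraph(frozenset(stripped.values()), arcs)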

Maximum Flow Problem

A flow network represented as a digraph G, a source node s in G.setOfNodes and a terminal node t in G.setOfNodes, can represent a maximum flow problem if s is the only source node of G and t is the only terminal node of G.

Label this representation maxFlowProblem:

maxFlowProblem = MaxFlowProblemState(G, sourceNodeUid, terminalNodeUid, maxFlowProblemStateUid)

Where sourceNodeUid == s.uid, terminalNodeUid == t.uid, and maxFlowProblemStateUid is an identifier for the problem instance.
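One way to encode this, with the field names assumed from the description above:

from collections import namedtuple

MaxFlowProblemState = namedtuple(
    'MaxFlowProblemState',
    ['G', 'sourceNodeUid', 'terminalNodeUid', 'maxFlowProblemStateUid'])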

Maximum Flow Solution

Let maxFlowProblem represent a maximum flow problem. The solution to maxFlowProblem can be represented by a flow network represented as a digraph H.

Digraph H is a feasible solution to the maximum flow problem on input maxFlowProblem if:

  1. strip_flows(maxFlowProblem.G) == strip_flows(H).

  2. H is a flow network and has feasible flows.

If in addition to 1 and 2:

  3. There can be no other flow network represented by digraph K such that strip_flows(maxFlowProblem.G) == strip_flows(K) and the flow into the terminal node of K is greater than the flow into the terminal node of H.

Then H is also an optimal solution to maxFlowProblem.

In other words, a feasible maximum flow solution can be represented by a digraph, which:

  1. Is identical to digraph maxFlowProblem.G of the corresponding maximum flow problem with the exception that the a.datum.flow of any of the arcs and the n.datum.flowIn and n.datum.flowOut of any of the nodes may be different.

  2. Represents a flow network that has feasible flows.

And, it can represent an optimal maximum flow solution if additionally:

  1. The n.datum.flowIn for the node corresponding to the terminal node in the maximum flow problem is as large as possible (when conditions 1 and 2 are still satisfied).

If digraph H represents a feasible maximum flow solution, then the flow out of its source node equals the flow into its terminal node; this follows from the max flow, min cut theorem (discussed below). Informally, since H is assumed to have feasible flows, this means that flow can neither be ‘created’ (except at the source node) nor ‘destroyed’ (except at the terminal node) while crossing any (other) node (conservation constraints).

Since a maximum flow problem contains only a single source node and a single terminal node, all flow ‘created’ at the source node must be ‘destroyed’ at the terminal node, or the flow network does not have feasible flows (the conservation constraint would have been violated).

Let digraph H represent a feasible maximum flow solution; the value above is called the s-t flow value of H.

Let mfps represent a maximum flow problem state that is a successor state of maxFlowProblem, which just means that mfps is exactly like maxFlowProblem with the exception that the values of a.datum.flow for arcs a in mfps.G.setOfArcs may be different than for arcs in maxFlowProblem.G.setOfArcs.

Here’s a visualization of a maxFlowProblem along with its associated solution. Each arc a in the image has a label; these labels are a.datum.flow / a.datum.capacity. Each node n in the image has a label; these labels are n.uid.

s-t Cut Flow

Let mfps represent a maximum flow problem state and let stCut represent an s-t cut of mfps.G. The cut flow of stCut is defined as follows:

s-t cut flow is the sum of flows from the partition containing the source node to the partition containing the terminal node minus the sum of flows from the partition containing the terminal node to the partition containing the source node.

Max Flow, Min Cut

Let maxFlowProblem represent a maximum flow problem and let the solution to maxFlowProblem be represented by a flow network represented as digraph H.

Let minStCut be the minimum capacity cut of the flow network represented by maxFlowProblem.G.

Because in the maximum flow problem flow originates in only a single source node and terminates at a single terminal node, and because of the capacity constraints and the conservation constraints, we know that all of the flow entering the terminal node must cross any s-t cut, in particular it must cross minStCut. This means the s-t flow value of H equals the cut flow of minStCut, which is at most the cut capacity of minStCut.

Solving the Maximum Flow Problem

The basic idea for solving a maximum flow problem is to start with some feasible maximum flow solution represented by digraph H; such a starting point can be the zero flow, strip_flows(maxFlowProblem.G). The task is then to use H and, by some greedy modification of the a.datum.flow values of some arcs in H.setOfArcs, to produce another maximum flow solution represented by digraph K such that K can still represent a flow network with feasible flows and the s-t flow value of K is greater than that of H. As long as this process continues, the quality (the s-t flow value) of the most recently encountered maximum flow solution (K) is better than that of any other maximum flow solution that has been found. If the process reaches a point where it knows that no other improvement is possible, it can terminate, and it will return the optimal maximum flow solution.

The description above is general and skips many proofs, such as whether such a process is possible or how long it may take; I’ll give a few more details and the algorithm.

The Max Flow, Min Cut Theorem

From the book Flows in Networks by Ford and Fulkerson, the statement of the max flow, min cut theorem (Theorem 5.1) is:

For any network, the maximal flow value from s to t is equal to the minimum cut capacity of all cuts separating s and t.

Using the definitions in this post, that translates to:

The solution to a maximum flow problem represented by a flow network represented as digraph H is optimal if the s-t flow value of H equals the cut capacity of the minimum capacity s-t cut of that network.

I like this proof of the theorem and Wikipedia has another one.

The max flow, min cut theorem is used to prove the correctness and completeness of the Ford-Fulkerson method.

I’ll also give a proof of the theorem in the section after augmenting paths.

The Ford-Fulkerson Method and the Edmonds-Karp Algorithm

CLRS defines the Ford-Fulkerson method like so (section 26.2):
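Paraphrased (a sketch of the method’s shape, not the book’s exact text):

FORD-FULKERSON-METHOD(G, s, t)
    initialize flow f to 0
    while there exists an augmenting path p in the residual network G_f
        augment flow f along p
    return f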

Residual Graph

The residual graph of a flow network represented as the digraph G can be represented as a digraph G_f:

  • cap(x, y) returns the sum of a.datum.capacity for all arcs in the subset of G.setOfArcs where arc a is in the subset if a.fromNode == x and a.toNode == y.

  • flow(x, y) returns the sum of a.datum.flow for all arcs in the subset of G.setOfArcs where arc a is in the subset if a.fromNode == x and a.toNode == y.

Briefly, the residual graph represents certain actions which can be performed on the digraph.

Each pair of nodes (x, y) in G.setOfNodes of the flow network represented by digraph G can generate 0, 1, or 2 arcs in the residual graph G_f of G.

  1. The pair (x, y) does not generate any arcs in G_f if there is no arc a in G.setOfArcs such that a.fromNode == x and a.toNode == y.

  2. The pair (x, y) generates the arc a in G_f.setOfArcs where a represents an arc labeled a push flow arc from x to y, with residual capacity cap(x, y) - flow(x, y), if flow(x, y) < cap(x, y).

  3. The pair (x, y) generates the arc a in G_f.setOfArcs where a represents an arc labeled a pull flow arc from y to x, with residual capacity flow(x, y), if flow(x, y) > 0.

  • Each push flow arc in G_f represents the action of adding a total of up to cap(x, y) - flow(x, y) flow to arcs in the subset of G.setOfArcs where arc a is in the subset if a.fromNode == x and a.toNode == y.

  • Each pull flow arc in G_f represents the action of subtracting a total of up to flow(x, y) flow from arcs in the subset of G.setOfArcs where arc a is in the subset if a.fromNode == x and a.toNode == y.

Performing an individual push or pull action from G_f on the applicable arcs in G might generate a flow network without feasible flows because the capacity constraints or the conservation constraints might be violated in the generated flow network.
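Here’s a sketch of constructing the residual graph under these definitions, aggregating parallel arcs between each ordered pair of nodes. ResidualDatum is the name this article later uses for the residual arcs’ datum; its exact fields are assumed here:

from collections import namedtuple

ResidualDatum = namedtuple('ResidualDatum', ['residualCapacity', 'action'])

def get_residual_graph_of(G):
    # Aggregate capacity and flow over all arcs from x to y.
    totals = {}
    for a in G.setOfArcs:
        cap, flow = totals.get((a.fromNode, a.toNode), (0.0, 0.0))
        totals[(a.fromNode, a.toNode)] = (cap + a.datum.capacity,
                                          flow + a.datum.flow)
    arcs = set()
    for (x, y), (cap, flow) in totals.items():
        if flow < cap:  # room to push more flow from x to y
            arcs.add(Arc(x, y, ResidualDatum(cap - flow, 'push')))
        if flow > 0.0:  # existing flow that can be pulled back
            arcs.add(Arc(y, x, ResidualDatum(flow, 'pull')))
    return DiGraph(G.setOfNodes, frozenset(arcs))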

Here’s a visualization of the residual graph of the previous example visualization of a maximum flow solution; in the visualization, each arc a represents a.datum.residualCapacity.

Augmenting Path

Let maxFlowProblem be a max flow problem, and let G_f be the residual graph of maxFlowProblem.G.

An augmenting path augmentingPath for maxFlowProblem is any path in G_f from the source node of maxFlowProblem to its terminal node.

It turns out that an augmenting path augmentingPath can be applied to a max flow solution represented by digraph H, generating another max flow solution represented by digraph K, where the s-t flow value of K is greater than that of H if H is not optimal.

Here’s how:
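A sketch of one way to do it, assuming at most one arc of H between each ordered pair of nodes, and rounding to TOL decimal places as a simplification of the significant-digit rounding described below:

TOL = 10

def augment(augmentingPath, H):
    # The bottleneck is the smallest residual capacity along the path.
    bottleneck = min(a.datum.residualCapacity for a in augmentingPath)
    # Net flow change requested on each ordered uid pair of H.
    changes = {}
    for a in augmentingPath:
        if a.datum.action == 'push':
            key = (a.fromNode.uid, a.toNode.uid)
            changes[key] = changes.get(key, 0.0) + bottleneck
        else:  # a 'pull' arc runs opposite to the arc of H it modifies
            key = (a.toNode.uid, a.fromNode.uid)
            changes[key] = changes.get(key, 0.0) - bottleneck
    new_flow, capacities = {}, {}
    for arc in H.setOfArcs:
        key = (arc.fromNode.uid, arc.toNode.uid)
        f = round(arc.datum.flow + changes.get(key, 0.0), TOL)
        assert 0.0 <= f <= arc.datum.capacity  # capacity constraint
        new_flow[key], capacities[key] = f, arc.datum.capacity
    # Rebuild nodes with flowIn/flowOut recomputed from the new arc flows.
    nodes = {
        n.uid: Node(n.uid, FlowNodeDatum(
            sum(f for (u, v), f in new_flow.items() if v == n.uid),
            sum(f for (u, v), f in new_flow.items() if u == n.uid)))
        for n in H.setOfNodes}
    arcs = frozenset(Arc(nodes[u], nodes[v],
                         FlowArcDatum(capacities[(u, v)], f))
                     for (u, v), f in new_flow.items())
    return DiGraph(frozenset(nodes.values()), arcs)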

In the above, TOL is some tolerance value for rounding the flow values in the network. This is to avoid cascading imprecision of floating point calculations. So, for example, I used TOL = 10 to mean round to 10 significant digits.

Let K = augment(augmentingPath, H); then K represents a feasible max flow solution for maxFlowProblem. For the statement to be true, the flow network represented by K must have feasible flows (not violate the capacity constraint or the conservation constraint).

Here’s why: In the method above, each node added to the new flow network represented by digraph K is either an exact copy of a node from digraph H or a node n which has had the same number added to its n.datum.flowIn as to its n.datum.flowOut. This means that the conservation constraint is satisfied in K as long as it was satisfied in H. The capacity constraint is satisfied because we explicitly check that any new arc a in the network has a.datum.flow <= a.datum.capacity; thus, as long as the arcs from the set H.setOfArcs which were copied unmodified into K.setOfArcs do not violate the capacity constraint, then K does not violate the capacity constraint.

It’s also true that the s-t flow value of K exceeds that of H if H is not optimal.

Here’s why: For an augmenting path augmentingPath to exist in the digraph representation of the residual graph of a max flow problem, the last arc a on augmentingPath must be a ‘push’ arc and it must have a.toNode == terminalNode. An augmenting path is defined as one which terminates at the terminal node of the max flow problem for which it is an augmenting path. From the definition of the residual graph, it is clear that the last arc in an augmenting path on that residual graph must be a ‘push’ arc: a ‘pull’ arc ending at the terminal node would correspond to an arc of the flow network leaving the terminal node, which would contradict the definition of a terminal node. Additionally, from the definition of path, it is clear that the terminal node is only modified once by the augment method. Thus augment modifies terminalNode.datum.flowIn exactly once, and it increases that value, because the last arc in augmentingPath must be the arc which causes the modification of terminalNode.datum.flowIn during augment, and, from the definition of augment as it applies to ‘push’ arcs, terminalNode.datum.flowIn can only be increased, not decreased.

Some Proofs from Sedgewick and Wayne

The book Algorithms, fourth edition by Robert Sedgewick and Kevin Wayne has some wonderful, short proofs (pages 892-894) that will be useful. I’ll recreate them here, though I’ll use language fitting in with the previous definitions. My labels for the proofs are the same as in the Sedgewick book.

Proposition E: For any digraph H representing a feasible maximum flow solution to a maximum flow problem, and for any s-t cut stCut of H, the cut flow of stCut equals the s-t flow value of H.

Proof: Let stCut be the s-t cut whose t-partition contains only the terminal node. Proposition E holds for stCut directly from the definition of s-t flow value. Suppose that we wish to move some node n from the s-partition into the t-partition; to do so we need to change the cut, which could change the cut flow and invalidate proposition E. However, let’s see how the value of the cut flow will change as we make this change. Node n is at equilibrium, meaning that the sum of flow into node n is equal to the sum of flow out of it (this is necessary for H to represent a feasible solution). Notice that all flow which is counted in the cut flow and enters node n enters it from the s-partition (flow entering node n from the t-partition, either directly or indirectly, would not have been counted in the cut flow value because it is heading the wrong direction based on the definition). Additionally, all flow exiting n will eventually (directly or indirectly) flow into the terminal node (proved earlier). When we move node n into the t-partition, all the flow entering n from the s-partition must be added to the new cut flow value; however, all flow exiting n must then be subtracted from the new cut flow value. The part of the flow heading directly into the t-partition is subtracted because this flow is now internal to the new t-partition and is not counted in the cut flow. The part of the flow from n heading into nodes in the s-partition must also be subtracted from the cut flow: after n is moved into the t-partition, these flows will be directed from the t-partition into the s-partition and so must not be accounted for in the cut flow; since these flows reduce the inflow into the s-partition, the outflow from the s-partition into the t-partition (where all s-t flows must end up) is reduced by the same amount. As node n was at equilibrium prior to the process, the update will have added the same value to the new cut flow value as it subtracted, thus leaving proposition E true after the update. The validity of proposition E then follows from induction on the size of the t-partition.

Here are some example flow networks to help visualize the less obvious cases where proposition E holds; in the image, the red areas indicate the s-partition, the blue areas represent the t-partition, and the green arcs indicate an s-t cut. In the second image, flow between node A and node B increases while the flow into terminal node t doesn’t change:

Corollary: No s-t cut flow value can exceed the capacity of any s-t cut.

Proposition F (max flow, min cut theorem): Let f be an s-t flow. The following three conditions are equivalent:

  1. There exists an s-t cut whose capacity equals the value of the flow f.

  2. f is a max flow.

  3. There is no augmenting path with respect to f.

Condition 1 implies condition 2 by the corollary. Condition 2 implies condition 3 because the existence of an augmenting path implies the existence of a flow with a larger value, contradicting the maximality of f. Condition 3 implies condition 1: Let C_s be the set of all nodes that can be reached from s with an augmenting path in the residual graph. Let C_t be the remaining nodes; then t must be in C_t (by our assumption). The arcs crossing from C_s to C_t then form an s-t cut which contains only arcs a where either a.datum.flow == a.datum.capacity or a.datum.flow == 0.0. If this were otherwise then the nodes connected by an arc with remaining residual capacity to C_s would be in the set C_s, since there would then be an augmenting path from s to such a node. The flow across the s-t cut is equal to the s-t cut’s capacity (since arcs from C_s to C_t have flow equal to capacity, and arcs from C_t to C_s carry no flow) and also to the value of the s-t flow (by proposition E).

This statement of the max flow, min cut theorem implies the earlier statement from Flows in Networks.

Corollary (integrality property): When capacities are integers, there exists an integer-valued max flow, and the Ford-Fulkerson algorithm finds it.

Proof: Each augmenting path increases the flow by a positive integer, the minimum of the unused capacities in the ‘push’ arcs and the flows in the ‘pull’ arcs, all of which are always positive integers.

This justifies the Ford-Fulkerson method description from CLRS. The method is to keep finding augmenting paths and applying augment to the latest solution, coming up with better solutions, until no more augmenting paths exist, meaning that the latest maximum flow solution is optimal.

From Ford-Fulkerson to Edmonds-Karp

The remaining questions regarding solving maximum flow problems are:

  1. How should augmenting paths be constructed?

  2. Will the method terminate if we use real numbers and not integers?

  3. How long will it take to terminate (if it does)?

The Edmonds-Karp algorithm specifies that each augmenting path is constructed by a breadth-first search (BFS) of the residual graph; it turns out that this choice for point 1 above will also force the algorithm to terminate (point 2) and allows the asymptotic time and space complexity to be determined (point 3).

First, here’s a BFS implementation:
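A sketch, returning the shortest augmenting path as a list of residual arcs, or an empty list if none exists:

from collections import deque

def bfs(sourceNode, terminalNode, G_f):
    # Arcs leaving each node of the residual graph.
    arcs_out = {}
    for a in G_f.setOfArcs:
        arcs_out.setdefault(a.fromNode, []).append(a)
    parent_arc = {sourceNode: None}
    queue = deque([sourceNode])
    while queue:
        current = queue.popleft()
        if current == terminalNode:
            # Walk parent arcs back to the source to recover the path.
            path = []
            while parent_arc[current] is not None:
                arc = parent_arc[current]
                path.append(arc)
                current = arc.fromNode
            return list(reversed(path))
        for arc in arcs_out.get(current, []):
            if arc.toNode not in parent_arc:
                parent_arc[arc.toNode] = arc
                queue.append(arc.toNode)
    return []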

I used a deque from the Python collections module.

To answer question 2 above, I’ll paraphrase another proof from Sedgewick and Wayne: Proposition G. The number of augmenting paths needed in the Edmonds-Karp algorithm, with N nodes and A arcs, is at most NA/2. Proof: Every augmenting path has a bottleneck arc: an arc that is deleted from the residual graph because it corresponds either to a ‘push’ arc that becomes filled to capacity or to a ‘pull’ arc through which the flow becomes 0. Each time an arc becomes a bottleneck arc, the length of any augmenting path through it must increase by 2. This is because each node in a path may appear only once or not at all (from the definition of path), and since the paths are being explored from shortest to longest, at least one more node must be visited by the next path that goes through the particular bottleneck node; that means an additional 2 arcs on the path before we arrive at the node. Since an augmenting path is of length at most N, each arc can be on at most N/2 augmenting paths, and the total number of augmenting paths is at most NA/2.

The Edmonds-Karp algorithm executes in O(NA²). At most NA/2 paths will be explored during the algorithm, and exploring each path with BFS is O(N + A), so the most significant term of the product, and hence the asymptotic complexity, is O(NA²).

Let mfps be a maxFlowProblemState.
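A sketch of the naive, digraph-regenerating Edmonds-Karp loop, built from the helper sketches above:

def edmonds_karp(mfps):
    # Start from the zero flow and keep applying shortest augmenting paths.
    H = strip_flows(mfps.G)
    while True:
        s = find_node_by_uid(mfps.sourceNodeUid, H)
        t = find_node_by_uid(mfps.terminalNodeUid, H)
        augmentingPath = bfs(s, t, get_residual_graph_of(H))
        if not augmentingPath:
            return H  # no augmenting path left: H is optimal
        H = augment(augmentingPath, H)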

The version above is inefficient and has worse complexity than O(NA²) since it constructs a new maximum flow solution and a new residual graph each time (rather than modifying existing digraphs as the algorithm advances). To get to a true O(NA²) solution, the algorithm must maintain both the digraph representing the maximum flow problem state and its associated residual graph. So the algorithm must avoid iterating over arcs and nodes unnecessarily, and update their values and associated values in the residual graph only as necessary.

To write a faster Edmonds-Karp algorithm, I rewrote several pieces of code from the above. I hope that going through the code which generated a new digraph was helpful in understanding what’s going on. In the fast algorithm, I use some new tricks and Python data structures that I don’t want to go over in detail. I will say that a.fromNode and a.toNode are now treated as strings and as uids to nodes. For this code, let mfps be a maxFlowProblemState.

Here’s a visualization of how this algorithm solves the example flow network from above. The visualization shows the steps as they are reflected in the digraph representing the most up-to-date flow network and as they are reflected in the residual graph of that flow network. Augmenting paths in the residual graph are shown as red paths, and in the digraph representing the problem, the set of nodes and arcs affected by a given augmenting path is highlighted in green. In each case, I’ll highlight the parts of the graph that will be changed (in red or green) and then show the graph after the changes (just in black).

Here’s another visualization of this algorithm solving a different example flow network. Notice that this one uses real numbers and contains multiple arcs with the same fromNode and toNode values.

Also notice that because arcs with a ‘pull’ ResidualDatum may be part of the augmenting path, the nodes affected in the digraph of the flow network may not be on a path in G.

Bipartite Graphs

Suppose we have a digraph G. G is bipartite if it’s possible to partition the nodes in G.setOfNodes into two sets (part_1 and part_2) such that for any arc a in G.setOfArcs it cannot be true that a.fromNode is in part_1 and a.toNode is in part_1. It also cannot be true that a.fromNode is in part_2 and a.toNode is in part_2.

In other words, G is bipartite if its nodes can be partitioned into two sets such that every arc must connect a node in one set to a node in the other set.

Testing Bipartiteness

Suppose we have a digraph G; we want to test if it is bipartite. We can do this in O(|G.setOfNodes| + |G.setOfArcs|) by greedily coloring the graph with two colors.

First, we need to generate a new digraph H. This graph will have the same set of nodes as G, but it will have more arcs than G. Every arc a in G will create 2 arcs in H; the first arc will be identical to a, and the second arc reverses the direction of a (Arc(a.toNode, a.fromNode, a.datum)).
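Here’s a sketch; rather than materializing the doubled digraph, it tracks undirected neighbours directly, which amounts to the same thing:

def bipartition_of(G):
    # Two-colour the undirected version of G; return the two parts,
    # or None if some component is not two-colourable.
    neighbours = {n: set() for n in G.setOfNodes}
    for a in G.setOfArcs:
        neighbours[a.fromNode].add(a.toNode)
        neighbours[a.toNode].add(a.fromNode)
    colour = {}
    for start in G.setOfNodes:
        if start in colour:
            continue
        colour[start] = 0
        stack = [start]
        while stack:
            n = stack.pop()
            for m in neighbours[n]:
                if m not in colour:
                    colour[m] = 1 - colour[n]
                    stack.append(m)
                elif colour[m] == colour[n]:
                    return None  # an odd cycle: G is not bipartite
    return (frozenset(n for n in G.setOfNodes if colour[n] == 0),
            frozenset(n for n in G.setOfNodes if colour[n] == 1))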

Matchings and Maximum Matchings

Suppose we have a digraph G and matching is a subset of arcs from G.setOfArcs. matching is a matching if for any two arcs a and b in matching, the sets {a.fromNode, a.toNode} and {b.fromNode, b.toNode} are disjoint. In other words, no two arcs in a matching share a node.

Matching matching is a maximum matching if there is no other matching altMatching in G such that len(matching) < len(altMatching). In other words, matching is a maximum matching if it is the largest set of arcs from G.setOfArcs that still satisfies the definition of matching (the addition of any arc not already in the matching will break the matching definition).

A maximum matching matching is a perfect matching if for every node n in G.setOfNodes there exists an arc a in matching where a.fromNode == n or a.toNode == n.

Maximum Bipartite Matching

A maximum bipartite matching is a maximum matching on a digraph G which is bipartite.

Given that G is bipartite, the problem of finding a maximum bipartite matching can be transformed into a maximum flow problem solvable with the Edmonds-Karp algorithm, and then the maximum bipartite matching can be recovered from the solution to the maximum flow problem.

Let bipartition be a bipartition of G.

To do this, I need to generate a new digraph (H) with some new nodes (H.setOfNodes) and some new arcs (H.setOfArcs). H.setOfNodes contains all the nodes in G.setOfNodes and two more nodes, s (a source node) and t (a terminal node).

H.setOfArcs will contain one arc for each arc in G.setOfArcs. If an arc a is in G.setOfArcs, and a.fromNode is in bipartition.firstPart and a.toNode is in bipartition.secondPart, then include a in H (adding a FlowArcDatum).

If a.fromNode is in bipartition.secondPart and a.toNode is in bipartition.firstPart, then include the reversed arc Arc(a.toNode, a.fromNode, a.datum) in H.setOfArcs (again adding a FlowArcDatum).

The definition of a bipartite graph ensures that no arc connects any nodes where both nodes are in the same partition. H.setOfArcs also contains an arc from node s to each node in bipartition.firstPart. Finally, H.setOfArcs contains an arc from each node in bipartition.secondPart to node t. a.datum.capacity == 1 for all arcs a in H.setOfArcs.

First, partition the nodes into the two disjoint sets (bipartition.firstPart and bipartition.secondPart) such that no arc in G.setOfArcs is directed from one set back into the same set (this partition is possible because G is bipartite). Next, add all arcs in G.setOfArcs which are directed from bipartition.firstPart to bipartition.secondPart into H.setOfArcs. Then create a single source node s and a single terminal node t and create the additional arcs described above.

Then, construct a maxFlowProblemState.
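Here’s a sketch of the whole transformation, reusing the max flow machinery sketched earlier (the uids 's' and 't' are assumed not to collide with any node of G):

def solve_maximum_bipartite_matching(G, firstPart, secondPart):
    s = Node('s', FlowNodeDatum(0.0, 0.0))
    t = Node('t', FlowNodeDatum(0.0, 0.0))
    first_uids = frozenset(n.uid for n in firstPart)
    second_uids = frozenset(n.uid for n in secondPart)
    arcs = set()
    for a in G.setOfArcs:
        if a.fromNode.uid in first_uids:     # firstPart -> secondPart
            arcs.add(Arc(a.fromNode, a.toNode, FlowArcDatum(1.0, 0.0)))
        else:                                # reverse secondPart -> firstPart
            arcs.add(Arc(a.toNode, a.fromNode, FlowArcDatum(1.0, 0.0)))
    for n in firstPart:
        arcs.add(Arc(s, n, FlowArcDatum(1.0, 0.0)))
    for n in secondPart:
        arcs.add(Arc(n, t, FlowArcDatum(1.0, 0.0)))
    H = DiGraph(frozenset(G.setOfNodes | {s, t}), frozenset(arcs))
    solution = edmonds_karp(
        MaxFlowProblemState(H, 's', 't', 'bipartite-matching'))
    # Saturated arcs across the bipartition form the maximum matching.
    return frozenset(a for a in solution.setOfArcs
                     if a.fromNode.uid in first_uids
                     and a.toNode.uid in second_uids
                     and a.datum.flow > 0.0)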

Minimal Node Cover

A node cover in a digraph G is a set of nodes (cover) from G.setOfNodes such that for any arc a of G.setOfArcs this must be true: (a.fromNode in cover) or (a.toNode in cover).

A minimal node cover is the smallest possible set of nodes in the graph that is still a node cover. König’s theorem states that in a bipartite graph, the size of the maximum matching on that graph is equal to the size of the minimal node cover, and it suggests how the node cover can be recovered from a maximum matching:

Suppose we have the bipartition bipartition and the maximum matching matching. Define a new digraph H; the arcs in H.setOfArcs are the union of two sets.

The first set is arcs a in matching, with the change that if a.fromNode is in bipartition.firstPart and a.toNode is in bipartition.secondPart then a.fromNode and a.toNode are swapped in the created arc; give such arcs an attribute (say, inMatching) to indicate that they were derived from arcs in a matching.

The second set is arcs a NOT in matching, with the change that if a.fromNode is in bipartition.secondPart and a.toNode is in bipartition.firstPart then a.fromNode and a.toNode are swapped in the created arc (give such arcs a similar attribute indicating that they were not derived from matching arcs).

Next, run a depth-first search (DFS) starting from each node n in bipartition.firstPart which is neither a.fromNode nor a.toNode for any arc a in matching. During the DFS, some nodes are visited and some are not (store this information in a visited field). The minimum node cover is the union of the unvisited nodes in bipartition.firstPart and the visited nodes in bipartition.secondPart.

This can be shown to lead from a maximum matching to a minimal node cover by a proof by contradiction: take some arc a that was supposedly not covered and consider all four cases regarding whether a.fromNode and a.toNode belong (whether as fromNode or toNode) to any arc in matching. Each case leads to a contradiction due to the order that DFS visits nodes and the fact that matching is a maximum matching.

Suppose we have a function to execute these steps and return the set of nodes comprising the minimal node cover when given the digraph G and the maximum matching matching:
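Here’s a sketch of such a function, following uids throughout so it doesn’t matter which flow data the matching’s arcs carry:

def koenig_minimal_node_cover(firstPart, secondPart, G, matching):
    first_uids = {n.uid for n in firstPart}
    matched_pairs = {frozenset({a.fromNode.uid, a.toNode.uid})
                     for a in matching}
    # Orient matched arcs secondPart -> firstPart and unmatched arcs
    # firstPart -> secondPart.
    succ = {n.uid: set() for n in G.setOfNodes}
    for a in G.setOfArcs:
        u, v = a.fromNode.uid, a.toNode.uid
        first, second = (u, v) if u in first_uids else (v, u)
        if frozenset({first, second}) in matched_pairs:
            succ[second].add(first)
        else:
            succ[first].add(second)
    # DFS from every unmatched node of firstPart.
    matched_first = {uid for pair in matched_pairs
                     for uid in pair if uid in first_uids}
    visited = {uid for uid in first_uids if uid not in matched_first}
    stack = list(visited)
    while stack:
        uid = stack.pop()
        for nxt in succ[uid]:
            if nxt not in visited:
                visited.add(nxt)
                stack.append(nxt)
    # Cover: unvisited nodes of firstPart plus visited nodes of secondPart.
    return frozenset(n for n in G.setOfNodes
                     if (n.uid in first_uids and n.uid not in visited)
                     or (n.uid not in first_uids and n.uid in visited))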

The Linear Assignment Problem

The linear assignment problem consists of finding a maximum weight matching in a weighted bipartite graph.

Problems like the one at the very start of this post can be expressed as a linear assignment problem. Given a set of workers, a set of tasks, and a function indicating the profitability of an assignment of one worker to one task, we want to maximize the sum of all assignments that we make; this is a linear assignment problem.

Assume that the number of tasks and workers are equal, though I will show that this assumption is easy to remove. In the implementation, I represent arc weights with an a.datum.weight attribute for an arc a.

Kuhn-Munkres Algorithm

The Kuhn-Munkres algorithm solves the linear assignment problem. A good implementation can take O(N³) time (where N is the number of nodes in the digraph representing the problem). An implementation that is easier to explain takes longer: one version regenerates digraphs at each step, and a faster variant maintains them. This is similar to the two different implementations of the Edmonds-Karp algorithm.

For this description, I’m only working with complete bipartite graphs (those where half the nodes are in one part of the bipartition and the other half in the second part). In the worker-task motivation this means that there are as many workers as tasks.

This seems like a significant condition (what if these sets are not equal?), but it is easy to fix this issue; I talk about how to do that in the last section.

The version of the algorithm described here uses the useful concept of zero-weight arcs. Unfortunately, this concept only makes sense when we are solving a minimization (if, rather than maximizing the profits of our worker-task assignments, we were instead minimizing the cost of such assignments).

Fortunately, it is easy to turn a maximum linear assignment problem into a minimum linear assignment problem by setting each of the arc weights to M - a.datum.weight, where M = max(a.datum.weight for a in G.setOfArcs). The solution to the original maximizing problem will be identical to the solution of the minimizing problem after the arc weights are changed. So for the remainder, assume that we make this change.
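A sketch of the weight flip, reusing the Arc and DiGraph namedtuples from earlier (WeightArcDatum is an assumed name for whatever datum carries a.datum.weight):

from collections import namedtuple

WeightArcDatum = namedtuple('WeightArcDatum', ['weight'])

def as_minimization(G):
    # w -> M - w turns maximizing total weight into minimizing total cost.
    M = max(a.datum.weight for a in G.setOfArcs)
    return DiGraph(G.setOfNodes,
                   frozenset(Arc(a.fromNode, a.toNode,
                                 WeightArcDatum(M - a.datum.weight))
                             for a in G.setOfArcs))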

The Kuhn-Munkres algorithm solves the minimum weight matching problem in a weighted bipartite graph by a sequence of maximum matchings in unweighted bipartite graphs. If we find a perfect matching on the digraph representation of the linear assignment problem, and if the weight of every arc in the matching is zero, then we have found the minimum weight matching, since this matching suggests that all nodes in the digraph have been matched by an arc with the lowest possible cost (no cost can be lower than 0, based on prior definitions).

No other arcs can be added to the matching (because all nodes are already matched) and no arcs should be removed from the matching because any possible replacement arc will have at least as great a weight value.

If we find a maximum matching of the subgraph of G which contains only zero-weight arcs, and it is not a perfect matching, we don’t have a full solution (since the matching is not perfect). However, we can produce a new digraph H by changing the weights of arcs in G.setOfArcs in a way that new 0-weight arcs appear and the optimal solution of H is the same as the optimal solution of G. Since we guarantee that at least one zero-weight arc is produced at each iteration, we guarantee that we will arrive at a perfect matching in no more than |G.setOfNodes|² = N² such iterations.

Suppose that in bipartition bipartition, bipartition.firstPart contains nodes representing workers, and bipartition.secondPart contains nodes representing tasks.

The algorithm starts by generating a new digraph H. Some arcs in H are generated from nodes in bipartition.firstPart.

Optimal-Flow Minimum-Cost Correspondence Assignment in Particle Flow Tracking

Alexandre Matov,1 Marcus M. Edvall,2 Ge Yang,1,3 and Gaudenz Danuser1,*

1 Department of Cell Biology, The Scripps Research Institute, La Jolla, CA 92037

2 Tomlab Optimization Inc., Pullman, WA 99163

3 Lane Center for Computational Biology & Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213

* Corresponding author's current address: Harvard Medical School, 240 Longwood Ave, Boston, MA 02115, Gaudenz_Danuser@hms.harvard.edu, tel: +1 617 432 7941


Comput Vis Image Underst. Author manuscript; available in PMC 2012 Apr 1.


Abstract

A diversity of tracking problems exists in which cohorts of densely packed particles move in an organized fashion; however, the stability of individual particles within the cohort is low. Moreover, the flows of cohorts can regionally overlap. Together, these conditions yield a complex tracking scenario that cannot be addressed by optical flow techniques that assume piecewise coherent flows, or by multiparticle tracking techniques that suffer from the local ambiguity in particle assignment. Here, we propose a graph-based assignment of particles in three consecutive frames to recover from image sequences the instantaneous organized motion of groups of particles, i.e., flows. The algorithm makes no a priori assumptions on the fraction of particles participating in organized movement, as this number continuously alters with the evolution of the flow fields in time. Graph-based assignment methods generally maximize the number of acceptable particle assignments between consecutive frames and only then minimize the association cost. In dense and unstable particle flow fields this approach produces many false positives. The approach proposed here avoids this via the solution of a multi-objective optimization problem in which the number of assignments is maximized while their total association cost is minimized at the same time. The method is validated on standard benchmark data for particle tracking. In addition, we demonstrate its application to live cell microscopy where several large molecular populations with different behaviors are tracked.

Index Terms: Particle tracking, vector field, multi-directional flows, graph algorithms, multi-objective optimization

1 Introduction

Computer vision algorithms for tracking particle flow have applications in fields as diverse as machine and stereo vision [1, 2], perceptual grouping [3], video surveillance [4, 5], meteorology [6], and astronomy. Recent reviews of particle tracking methods can be found in [7]. Experimental fluid dynamics is another field with extensive tracking applications [8]. Here, particle-based analyses of flow fields are often complemented with texture-based approaches that do not rely on image segmentation [9], yet assume piecewise coherent flow conditions [10]. Less known by the computer vision community, particle tracking is also a very prominent field in biophysics, where the goal is to follow the motion of single molecules and molecular complexes [11-13]. It is in this context that we have begun to develop particle tracking methods.

Particle-based tracking requires the formulation of a motion model that defines an explicit or implicit cost of particle correspondences between frames and an assignment process. Multiple-hypothesis tracking (MHT) computes the most probable particle path by achieving globally optimal assignments both in space and time [14, 15]. However, this approach leads to a combinatorial explosion in cases where many particles move in close proximity. Greedy optimization schemes have been proposed as alternatives. They reduce the computational cost by compromising temporal globality [16-19]. In live cell microscopy particles generally originate from fluorescently labeled molecules incorporated in sub-cellular structures. They have low stability and move in very dense, complex motion patterns, rendering the application of MHT and most of its approximations impossible. Thus, we sought to develop a method that captures the instantaneous state of particle flows from a minimal number of consecutive frames. We require particles to be present for a minimum of three time points and assume that between three time points they move at constant speed and preserve directionality. This permits definition of a local motion model similar to those in [16, 20]. Additionally, very often sub-populations of particles undergo organized movements, while others perform a random walk. The relative size of the populations and the number of sub-populations with distinct directions of organized motion are not known a priori. To extract these populations, we complement the local motion model with a motion model that accounts for regional coherence among particle movements.

Like other particle tracking methods [16, 21-24], we employ graph-based approaches to assign corresponding particles between frames. In general, these methods maximize the number of assignments and only then minimize the assignment cost. However, when particles are dense and unstable, maximizing the number of assignments tends to generate a large number of false positives. The risk of over-linking is further increased in data with a substantial portion of particles present for only one frame, and where multiple particle flow fields interpenetrate, both leading to high particle clutter. Under such conditions it is advantageous to identify a subset of particles with high confidence correspondences while leaving out uncertain links. On the other hand, to capture the spatial details of the particle motion field the number of assignments should still be maximized. Balancing the rejection of false positive assignments without generating a high number of false negative assignments is one of the key challenges in real-world tracking problems. The main contribution of this paper consists of proposing a multi-objective optimization method that determines the trade-off between the two competing requirements to establish robust particle correspondences over short time windows and thus extracts on-the-fly information on the instantaneous organization of particle flows in complex scenes. We compare the performance of this approach to related work on standard benchmark examples for particle tracking. We then introduce measures for the complexity of a tracking problem and demonstrate the performance of the proposed optimal-flow minimum-cost particle assignment on significantly more difficult problems on a particle image velocimetry example with ground truth and on fluorescence microscopy sequences capturing multi-directional, unstable flows in the foreground of a significant background population of particles with unorganized movements.

2 Tracking Problem Statement

Consider a set of Nf particles Pf = {p1,…,pNf} detected in frame f. Our goal is to define a particle flow field in this frame with instantaneously smooth motion that is represented by a set of tracks connecting each one of a subset of particles in Pf to one and only one particle in the set Pf−1 of Nf−1 particles detected in frame f−1 and to one and only one particle in the set Pf+1 of Nf+1 particles detected in frame f+1. To identify these tracks we assemble a graph G(V,E) with a source s ∈ V, a sink r ∈ V, vertices VP ⊂ V comprising the union of particles detected in the three consecutive frames, i.e. VP = Pf−1 ∪ Pf ∪ Pf+1 = {p1,…,pN} with N = Nf−1 + Nf + Nf+1, and vertices VT ⊂ V comprising M candidate triplets VT = {t1,…,tM} (Fig. 1a,b). Each candidate triplet ti consists of one particle in Pf−1, one particle in Pf, and one particle in Pf+1, thus defining a candidate particle track over three consecutive frames. The construction of candidate particle tracks requires that particle displacements between frames are less than a prior estimate of the maximal particle velocity times the time interval between frames (Fig. 1a; see also Section 3). The graph contains three types of edges. First, edges (s, ti) ∈ E between the source and any one of the triplets. They have a capacity a(s, ti) = 3 and a cost c(s, ti). The cost is derived from the motion smoothness of the three-frame track associated with triplet ti. Second, edges (ti, ph) ∈ E connect triplets to the three contributing particles. Third, edges (ph, r) ∈ E connect particles to the sink. Flow conservation throughout the graph requires that a(ti, ph) = a(ph, r) = 1. The cost values associated with the second and the third types of edges are null, i.e. c(ti, ph) = c(ph, r) = 0.

Fig. 1

Definition of a graph for the assignment of particles over three consecutive frames. (a) Definition of candidate matches of particles based on circular windowing of particle displacements; (b) Derivation of a graph from candidate matches in a). Numbers...

The first question is how many triplets M̂ ≤ M with mutually exclusive particles exist in the set {t1,…,tM}. Given the graph G(V, E), answering this is equivalent to solving the maximum source-to-sink flow [25], which yields one or multiple subgraphs {G1,…,GK}. Each subgraph contains a subset of edges and vertices (i.e., triplets and particles) carrying a maximum flow 3M̂. The second question is then which of the subgraphs Ĝ, with Ĝ ∈ {G1,…,GK}, has overall minimal cost. This is equivalent to solving the minimum cost flow problem [25]. Under conditions of dense and unstable particle fields, enforcing maximum flow tends to select many false positive triplets at the expense of a smaller set of true positive triplets. Whereas false positives could be eliminated posterior to the track identification by criteria of motion smoothness, this data cleaning would not allow the recovery of true positives that were missed in fulfilling the condition that triplet selection must be comprised of mutually exclusive particles.

To increase the robustness of particle track selection, we propose to relax the maximum-flow condition to find a flow 3M* with M* < M̂ triplets. We refer to this procedure as an optimal-flow minimum-cost algorithm. Specifically, the objective of the optimal-flow minimum-cost particle assignment is to identify a subgraph G* with a minimal overall cost carrying a reduced (less than the maximum) flow so that low confidence triplets are eliminated. Evidently, this problem has a trivial solution M* = 0. Thus, we introduce as the second objective the maximization of the source-to-sink flow carried by G*:

(1)

Of note, by formulating the multi-objective problem Eq. 1, the sequential procedure of the maximum-flow minimum-cost computation is replaced by a competition between selecting only high-confidence assignments and generating as many assignments as possible. The triplets in G* then define the particle tracks that represent the instantaneous particle flow in the three-frame sequence.

3 Construction of Particle Triplets

To build the triplets {t1,…, tM } we apply the following three steps:

  1. Build a set of candidate assignments between particles in frame f and frame f - 1 for which ‖x(f)−x(f−1)‖ ≤ r0 and a set of candidate assignments between particles in frame f and frame f + 1 for which ‖x(f+1)−x(f)‖ ≤ r0. The search radius r0 defines the largest particle displacement allowed between consecutive frames.

  2. Construct candidate triplets by pairing all candidate assignments from the first and from the second set that share the same particle in frame f. Unpaired candidate assignments are ignored in the graph construction.

  3. For each candidate triplet, compute a cost function that penalizes changes of track direction and speed.

3.1 Selection of the Search Radius

We introduce a search radius r0 to limit the search space for triplet construction. Values for r0 can be derived from prior knowledge of the maximum speed of particles. If this information is unavailable, we define the radius so that the number of candidate triplets is approximately one order of magnitude greater than the average number of particles per frame. Accordingly, a large majority of particles participates in more than one triplet. Increasing the search radius beyond this point has virtually no effect on the cost distribution of the triplets selected by a maximum-flow minimum-cost computation (Fig. 1c). At this point it is likely that all true three-frame tracks are included in the set of candidate triplets and the triplet selection is sufficiently constrained by the condition of mutual exclusivity. Any larger search radii generate additional candidate triplets that merely increase computational cost of the solution without contributing true positives.

3.2 Cost Computation

We apply a local and regional motion model to define the cost of a triplet. The cost function iteratively weighs the influences of the two models based on the posterior distribution of model variables that is generated by the selected candidate tracks (see below). The iterative updating of model weights is terminated when the population of selected triplets varies less than 5% between iterations. For most tracking problems with a few hundred particles 3 – 4 iterations are sufficient for convergence because stable statistics can be drawn from the large number of selected triplets.

3.2.1 Local Motion Model

The angle αi between the first and the second segment of each triplet and the difference between their respective lengths δi define the variables of the local motion model, analogous to the motion model proposed by Sethi and Jain [20] (Fig. 1d). In large particle fields both variables have a zero mean and are normally distributed. The latter requirement can be further relaxed to unskewed distributions. In our implementation deviations from these conditions are monitored during the iterations and tracking solutions are rejected in case these assumptions are not fulfilled. To initiate the tracking, values of the distribution variances, σα² and σδ², are defined by an initial assumption of the particle behavior. Subsequently, the values are updated based on the distributions of the selected triplets, see below.

3.2.2 Model of Regional Flow Organization

After a first selection of triplets by optimal-flow minimum-cost computation (see Section 4) the preferential direction of particle flow is determined in each location of the image by anisotropic vector field filtering of the resulting particle displacements between frames f-1 and f+1 [26]. If the data contains interpenetrating flow fields, the filtering step is preceded by directional clustering and classification of particles into subfields (see Section 4.3). The filtered particle flow fields are employed in subsequent iterations to define an additional term in the cost function that penalizes regional flow incoherence. For each triplet the alignment of the corresponding three-frame track with the filtered particle flow field is determined by the angle βi between local flow direction and the vector from the start to the end of the track (Fig. 1d). By construction of the filtered particle flow, βi is a random variable with zero mean and a symmetric distribution.

3.2.3 Cost Defined by Local and Regional Motion Models

We then assume that the three variables characterizing the local and regional motion models are mutually independent random variables. Thus, the cost of a triplet is expressed as the Mahalanobis distance between [αi, δi, βi] and the zero mean average of each of the distributions:

(2)

Importantly, the balancing of the weights between the three components of the cost function is entirely data-driven and requires no user-adjustment between different data sets.

4 Optimal-Flow Minimum-Cost Computation

To select from the candidate triplets those fulfilling the optimal-flow minimum-cost condition in Eq. 1 we perform six steps:

  1. Calculate the maximum source-to-sink flow that can be carried by the graph defined in Fig. 1b.

  2. Compute the maximum-flow minimum-cost solution.

  3. Construct the Pareto optimal front (POF) [27] to the bi-objective problem Eq. 1 with equidistant spacing on both objective axes.

  4. Define the optimal-flow from the location on the POF closest to the utopia point and recompute the minimum-cost solution enforcing optimal-flow.

  5. Cluster the track vectors associated with the selected triplets into multiple flow fields and/or filter the tracks to calculate particle flow fields.

  6. Recompute the cost of the triplets in the graph, taking into account the updated particle flow field for computing the cost contribution of the regional motion model and the updated distributions of [α, δ, β] for computing the relative weights between the components in Eq. 2.

Steps 2 – 6 are repeated iteratively until the change in selected triplets is <5%. At each iteration, the maximum-flow minimum-cost solution is recomputed using the updated costs for the triplets and a new POF is constructed. Of note, the set of candidate triplets depends only on the search radius and thus is not affected by cost updates. As discussed in the following sections steps 2 – 4 can be accomplished at near real-time speed on a standard PC, even for several thousand triplets, using linear programming and an approximation of the POF. Generally, three iterations suffice to obtain a solution for which over 95% of the selected triplets are identical to the selection of the previous iteration.

4.1 Maximum-Flow Minimum-Cost Computation to Initialize Optimal-Flow Minimum-Cost Computation

To ensure that the maximum-flow minimum-cost computation in the graph of Fig. 1b does not generate residual capacity in any of the vertices linking the source to the selected triplet, we reformulate the underlying minimum-cost flow problem as a binary programming problem [25]:

(3)

The definition of the sparse adjacency matrix A and of the supply-demand vector b are given in Table 1. The first M elements of the binary solution vector x indicate the minimum cost selection of M′ triplets for any given source-to-sink flow 3M′ ≤ 3M̂ through the graph, with xi = 1 if the triplet is selected and xi = 0 if it is excluded from the solution. Importantly, by the definition of A each selected triplet receives three flow units from the source and passes one unit on to each of its three associated particles. Flow conservation is maintained for vertices representing individual particles in the three consecutive frames. To supply the graph with sufficient flow for the selection of M′ triplets, we set b1 = 3M′. To enforce flow conservation in all vertices, the graph must be drained completely at the sink, hence the last element bM+N+2 = −3M′, while all other entries of b are zero. Flow conservation in all vertices implies that triplets are never split up.

Table 1

Construction of the adjacency matrix A and the demand vector b.

To determine M̂, i.e., the maximum number of mutually exclusive triplets in the graph, Eq. 3 is first solved with equal cost for all edges and an inequality constraint bl ≤ Ax ≤ bu with bl,1 = 0, bu,1 = 3M and bl,M+N+2 = −3M, bu,M+N+2 = 0 defining the upper bounds for supply and demand in source and sink. Solving Eq. 3 in a second step with b1 = −bM+N+2 = 3M̂ yields a maximum-flow minimum-cost selection of triplets.
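As an illustration only (not the authors' implementation): assuming the matrix A, vector b, and cost vector c of Eq. 3 have been assembled as numpy arrays per Table 1, a binary program of this form could be solved with SciPy's mixed-integer interface:

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def solve_binary_min_cost(A, b, c):
    # Minimize c^T x subject to A x = b with binary x (Eq. 3).
    result = milp(c=c,
                  constraints=LinearConstraint(A, b, b),
                  integrality=np.ones_like(c),
                  bounds=Bounds(0, 1))
    return result.x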

4.2 Multi-Objective Optimization Based on Pareto Optimality between Flow and Cost Reduction

By maximizing the flow prior to identifying the minimum-cost configuration, solutions with a large number of triplets are favored over solutions with fewer triplets, although the additional triplets may be very costly (Fig. 2a). Slight relaxation of the maximum flow condition allows the solution to converge to a selection of triplets with dramatically decreased overall cost (Fig. 2b). Figure 2c shows the cost distribution for triplets in a real-world data example (see Figure 5) for various levels of flow reductions. Starting with the cost distribution produced by the maximum-flow minimum-cost solution, a sub-maximal supply-demand b1 = −bM+N+2 = 3M′, M′ < M̂, in Eq. 3 substantially reduces the number of high-cost triplets. Critically, the removal of high-cost triplets rescues low-cost triplets, which under maximum-flow conditions are sacrificed for the benefit of selecting the largest possible number of triplets. Thus, the number of triplets in the lowest-cost bin is higher for all sub-maximum flow conditions. The relationship between flow reduction and cost reduction is non-linear but convex. In the beginning, a slight decrease in flow results in a significant drop of the cost. Beyond a certain optimal flow, the improvement in the total cost is insignificant vis-à-vis the loss of triplets. Removal of too many triplets can even result in elimination of low-cost triplets (Fig. 2c).

Fig. 2

Multi-Objective Optimization. (a) Example where maximizing network flow, i.e. the number of triplets, generates incorrect tracks. (b) Relaxation of the maximum-flow condition generates triplets with much less overall cost. (c) Cost distribution of triplets...

Fig. 5

Tracking bi- and tri-directional microtubule polymer flux in metaphase mitotic spindles of Xenopus leavis egg extracts imaged by fluorescent speckle microscopy (see videos 4 and 5 for raw data): (a) Speckle detection. (b) Three-frame tracks selected by...

The search for the optimal flow defines a bi-objective optimization problem with two competing goals:

(4)

The second goal is equivalent to minimizing the number of triplets R = M̂ − M* removed from the maximum-flow solution. Thus, Eq. 4 can be reformulated as a convex single objective problem by applying the weighted sum method [28],

(5)

where α defines the weight between cost reduction and loss of triplets.

We set α−1 = E(ci) and estimate E(ci) ≈ median(ci), ∀i, where the ci denote the costs of the triplets selected by the initial maximum-flow minimum-cost computation; thus, E(αci) = 1. Underlying the choice E(ci) ≈ median(ci) is the assumption that the majority of the triplets selected by maximum-flow minimum-cost computation have costs similar to the triplets to be selected by optimal-flow minimum-cost computation. Under this transformation, removal of a triplet with average cost results in equal steps on the axis of flow reduction (Fig. 2d, horizontal) and on the axis of total cost reduction (Fig. 2d, vertical). Removal of a high-cost triplet yields a larger cost reduction than flow reduction, whereas removal of a low-cost triplet yields less cost reduction than flow reduction. Thus,

(6)

describes the convex POF for the two competing objectives in Eq. 4 (Fig 2d, dashed line). The optimal solution requires that cost and flow reduction are equal, hence

(7)

The solution to Eq. 7 is geometrically defined as the location on the POF closest to the origin, i.e. the Utopia point (Fig. 2d, diamond). For large problems with thousands of particles, computation of the exact POF according to Eq. 6 would be computationally expensive. Thus, we opted to construct an approximation that avoids re-calculating the minimum-cost flow problem for every cost reduction. To achieve this we first rank all triplets in the order of ascending normalized cost. Then, the optimal flow is determined as (Fig. 2d, square)

(8)

The final selection of triplets under optimal-flow minimum-cost conditions is performed by solving Eq. 3 subject to a supply-demand vector with b1 = −bM+N+2 = 3M*.

The approximation requires minimum-cost flow computation only twice. Hence, the optimal-flow minimum-cost triplet assignment is accelerated by two to three orders of magnitude. The distance between the exact and the approximate POF depends on the application. By construction, the exact front is below the approximate front. From this it follows that the point on the approximate POF closest to the utopia point is always associated with a larger flow reduction than the exact solution. This means that the approximation never introduces additional false positives over the exact solution, which is the main objective underlying the design of our algorithm.

4.3 Filtering and Mapping of Particle Flow Fields

For graphical representations of flow fields and to enforce regional flow coherence in the computation of the cost of each triplet (see 3.2.2) we spatially filter the tail-to-head vectors of the selected optimal-flow minimum-cost triplets. To account for the possibility of interpenetrating flow fields we first identify clusters of vectors with distinct directionality. Initially, the number of clusters and their angular orientation is determined using an expectation-maximization algorithm [29]. For reasons of speed we omit this step for image sequences with an a priori known number of flows. Subsequently, a k-means algorithm, modified to account for the cyclic distribution of vector directions, is used to assign triplets to a particular cluster. Each of the flow vector clusters is smoothed individually by fast anisotropic Gaussian filtering [26]. If the density of vectors per cluster and time-point is insufficient for spatial filtering, but the particle flows are quasi-stationary at the time scale of the movie, optimal-flow minimum-cost triplets from multiple consecutive three-frame tracking solutions are integrated.

5 Results

We first benchmarked optimal-flow minimum-cost particle tracking on standard particle tracking data sets used in the computer vision literature, and then analyzed particle image velocimetry (PIV) benchmarking data sets with known ground truth as well as high-resolution light microscopy time-lapse sequences of dynamic molecular assemblies. Besides the much higher particle density, these sequences pose a significantly harder test case for tracking due to the low SNR of the raw movies, the instability of the particles, and the interpenetration of multiple flow fields.

Table 2 indicates parameters that score the level of difficulty of the presented tracking problems. In all examples we set the search radius to two times the mean particle displacement between consecutive frames. According to these parameters, the first three examples could be tracked without a motion model. Rapid movement with symmetric motion can be tracked using the local motion model (example 4), while dense (example 5) and interpenetrating (example 6) flows require the use of both local and regional motion models. A particularly critical parameter for scoring the difficulty of a tracking problem is how many particles participate in more than one candidate assignment, i.e., in our case, triplets (Table 2, last column); a sketch of this statistic follows below. The results in Figs. 3–5 indicate the participation distributions for each of the problems.
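The participation statistic can be computed directly from the candidate triplet list; a minimal sketch (with illustrative names) follows.

```python
from collections import Counter

def participation_histogram(triplets):
    """Histogram of how many candidate triplets each particle appears in.

    triplets -- iterable of (id_frame1, id_frame2, id_frame3) index tuples
    """
    per_particle = Counter(p for t in triplets for p in t)
    # e.g. {1: 120, 2: 85, ...}: number of particles that participate in
    # exactly 1, 2, ... candidate triplets
    return Counter(per_particle.values())
```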

Fig. 3

Benchmarking on standard particle tracking test data sets used in the computer vision literature. a) Rotating ball; b) Flock of birds; c) School of fish; d) Rotating dish. In all cases three-frame tracks are appended over time-points and overlaid on the...

Table 2

Parameters to score the difficulty of a tracking problem. Mean NN denotes the mean nearest neighbor distance between particles in a frame.

5.1 Validation on Common Benchmark Data Sets for Particle Tracking

In this section we present four tracking examples (data kindly provided by Khurram Shafique) with associated difficulty scores listed in Table 2. The reader is encouraged to compare the results presented in Fig. 3 with those presented in [17]. The ‘rotating ball’ sequence exhibits numerous particle occlusions, leading to false positive triplets. Also, our tracker produces some errors caused by image border effects. For the ‘fish’ and ‘bird’ sequences our algorithm produced no more than one false positive per ten frames (see Videos 1 and 2). Overall, the tracking results represent well the multidirectional flow of the particle ensemble as well as the particular motion of individual particles. In all scenes a small fraction of false positives (≪ 1%) is picked up by the algorithm, associated with random triplets of very low cost, i.e., straight trajectories of constant speed. Although it is not the goal of our algorithm to reconstruct complete trajectories, the high quality of the triplets permitted a seamless concatenation of triplets extracted from consecutive runs of the algorithm, first on frames 1, 2, 3, then on 2, 3, 4, etc. Tracks in the ‘rotating dish’ data [16] exhibit a ratio of mean particle displacement to nearest neighbor distance roughly three times higher than in the first three sequences (Table 2, second column). While for the sequences in Fig. 3a–c particles participate in 1–6 candidate triplets, 87% of the particles on the ‘rotating dish’ participate in multiple triplets, some in up to 20 triplets (histogram insets for each panel of Fig. 3). Nevertheless, optimal-flow minimum-cost tracking resolved 99.5% true positives and 0.5% false negatives (Fig. 3d). The false negatives are an inherent consequence of optimal-flow computation in noise-free sequences, where some low-cost tracks may be excluded for the benefit of robust rejection of high-cost tracks. Indeed, the result does not contain a single false positive.

5.2 Particle Image Velocimetry Data

To further investigate the breaking point of the algorithm relative to a ground truth, we analyzed simulated particle image velocimetry data generated with the simulation software of [30]. We generated images with more than 1300 particles, with a mean particle displacement between frames of 4.5 pixels and a maximum particle displacement of 9 pixels. With this, the ratio between the average particle displacement and the average nearest neighbor distance between particles within a frame was >1 (Table 2), i.e., particles generally move beyond the positions of their nearest neighbor particles. This renders the tracking problem significantly more challenging than any of the problems in the previous section. Moreover, to test the performance of the algorithm in rejecting true negatives, we perturbed 50% of the simulated particle positions within the range [−9, 9] pixels to generate tracks that do not belong to the flow field (a sketch of this perturbation follows below). We set the search radius to 9 pixels. On average, particles participate in 24 candidate triplets, some in as many as 120 (see the histogram in Fig. 4b). To appreciate the difficulty of this tracking problem, the reader is encouraged to watch Video 3. Fig. 4a presents the results from optimal-flow minimum-cost tracking. True positives are shown in blue, true negatives in black. Magenta and red indicate false negatives and false positives, respectively. The self-adaptive weights of the cost function converged to a solution within 3 iterations.
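The perturbation can be reproduced along the following lines; the uniform noise model is an assumption consistent with the stated range, and all names are illustrative.

```python
import numpy as np

def perturb_positions(xy, fraction=0.5, max_shift=9.0, seed=0):
    """Displace a random fraction of particle positions within
    [-max_shift, max_shift] pixels to generate non-flow tracks.

    xy -- (N, 2) array of particle coordinates in pixels
    """
    rng = np.random.default_rng(seed)
    xy = np.array(xy, dtype=float)
    picked = rng.choice(len(xy), size=int(fraction * len(xy)), replace=False)
    xy[picked] += rng.uniform(-max_shift, max_shift, size=(len(picked), 2))
    return xy
```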

Fig. 4

(a) Benchmarking on particle-image velocimetry data set, generated by http://piv.vsj.or.jp/piv/java/img2-e.html. The example contains 1300 particles per frame. 50% of all particle coordinates were perturbed. Blue arrows - true positive; black arrows -...

The specificity, i.e., the ratio of true negatives to true negatives plus false positives, is 96%. The sensitivity, i.e., the ratio of true positives to true positives plus false negatives, is 75%. While this performance seems relatively low at first sight, it is inherently connected to the design of the optimal-flow minimum-cost algorithm. By construction, false positives are avoided at the expense of generating some false negatives. However, in large and dense particle fields the loss of ∼20% of particle trajectories is uncritical for the reconstruction of particle flows, as long as few of the accepted trajectories have errors. Overall, the high specificity demonstrates the robust performance of the tracker in resolving strongly perturbed, dense particle flows.
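In terms of the confusion-matrix counts, the two scores are simply:

```python
def specificity(tn, fp):
    """Fraction of perturbed (non-flow) tracks correctly rejected."""
    return tn / (tn + fp)

def sensitivity(tp, fn):
    """Fraction of true flow tracks correctly recovered."""
    return tp / (tp + fn)

# illustrative counts only: specificity(tn=960, fp=40) == 0.96
```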

5.3 Application in Live Cell Imaging of Sub-Cellular Structural Dynamics

One area of tracking applications where high specificity is preferred over high sensitivity is light microscopic imaging of sub-cellular structures. Most often, these structures generate diffraction-limited particles of very high density, i.e., the distances between particles are comparable to the particle size itself. A live cell imaging modality that generates such data is Fluorescent Speckle Microscopy (FSM) [31]. Fluorescent speckles are diffraction-limited images of fluorophore clusters that are randomly incorporated into macromolecular assemblies. Figure 5a shows an FSM image of the microtubule polymer of a metaphase meiotic spindle [32], a molecular machine that forms during cell division for the segregation of the replicated DNA into two daughter cells. The polymers in this structure undergo bi-polar flux with areas of dense, anti-parallel sliding (see Video 4 for a side-by-side comparison of the raw data (left) and the time evolution of the two anti-parallel particle flows (right)). Robust tracking and mapping of this flux is critical for cell biological analyses of the spindle architecture and the mechanisms that mediate DNA segregation. FSM imaging of the meiotic spindle yields time-lapse sequences with hundreds to thousands (>1000 in this example) of point features moving past one another at distances comparable to the size of the point features.

5.3.1 Scale Invariant Speckle Detection

To identify speckles in each frame of the time-lapse sequence, we used a three-step detection approach [33] based on scale-space theory [34]. In the first step we applied a band-pass filter to select the image frequencies associated with the speckles while attenuating frequencies associated with image noise or with longer-range variations of the diffuse background. The filter was implemented as a difference of Gaussians (DoG). As speckles represent a diffraction-limited image feature, the higher frequency cut-off was defined by a Gaussian G(σ1) with σ1 = 0.21λ/(NA·Pxy), derived from the microscope point spread function [35]. Here, λ denotes the emission wavelength of the fluorescent label, NA the numerical aperture of the objective lens, and Pxy the pixel size in object space. The lower frequency cut-off was defined empirically by a Gaussian G(σ2) with σ2 = 1.1·σ1. In the second step we applied unimodal thresholding [36] to the DoG image intensity histogram to remove pixels that are unlikely to belong to speckles. After thresholding, candidate speckle images were collected by connected component labeling. To further eliminate from this set of image objects false positives associated with protein aggregates or out-of-focus structures, which usually have a brightness and size different from speckles, we compared in a third step the unfiltered image intensities I_i(x,y) of individual objects against the mean speckle image I_speckle(x,y), estimated by spatial alignment and averaging of all extracted objects. Objects that significantly deviated from this mean image were detected, again by unimodal thresholding, from the histogram of the sum of squared intensity differences between the pixels of individual objects and the pixels of the mean speckle image. As a result of the feature detection procedure, a list of speckles was obtained for each frame, including the position of each object centroid (example shown in the lower half of Fig. 5a).
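The band-pass step of this procedure can be sketched as follows, with scipy's Gaussian filter standing in for the actual implementation; wavelength and pixel size are assumed to share the same length unit.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bandpass(image, wavelength, na, pixel_size):
    """Difference-of-Gaussians band-pass for diffraction-limited speckles.

    sigma1 = 0.21*wavelength/(na*pixel_size) is the PSF-derived upper
    cut-off [35]; sigma2 = 1.1*sigma1 is the empirical lower cut-off.
    """
    sigma1 = 0.21 * wavelength / (na * pixel_size)
    sigma2 = 1.1 * sigma1
    img = np.asarray(image, dtype=float)
    return gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)
```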

5.3.2 Tracking of Anti-Parallel Speckle Flows

Intrinsic to the mechanism of speckle formation in the spindle, these particles are unstable, with an average presence of 3 to 4 frames [37]. Three-frame tracks produced by optimal-flow minimum-cost tracking and subsequently separated by motion direction are displayed in Figure 5b. Fig. 5c shows the filtered vector field indicating the average local particle movement over three frames. Of note, the displacement of these particles exceeds the distance between particles, and in the mid-zone opposing flows densely interlace. In addition, the density of particles fluxing northward (red) increases towards the north pole of the spindle, and vice versa for southward fluxing particles (yellow). Together with the low particle stability, such spatial inhomogeneity adds significant difficulty to the tracking problem.

Figs. 5d–f show the distributions of α, δ, and β from which the weights of the local and regional motion models were derived. Interestingly, the distribution of α is significantly broader than that of β, suggesting that the directional changes between consecutive frames are larger for individual tracks than the overall deviation of a track from the general direction of the particle flow. This characteristic can be appreciated in the raw data of the supplementary videos and is related to the fluctuation of individual microtubules within an overall stable polymer scaffold. While this behavior was unknown before applying the tracking, and thus could not be enforced by user inputs, the adaptive adjustment of the cost function robustly uncovered it. The weights converged in 3 iterations.

Figs. 5h and 5i depict a raw image and the optimal-flow minimum-cost tracking of a tri-polar particle flow field in a sub-region of a meiotic spindle (Video 5 shows the raw images of the spindle (left) and the cropped and enlarged area of the three overlapping particle flow fields (right)). This data set is included as an example to illustrate the adjustment of the local and regional motion models to an a priori unknown number of overlapping flow directions. Again, because of the strict rejection of false positives, the iterative classification of particle tracks locked in on three clusters, although there is no region in this small field of view with fewer than three motion directions. This permitted the reconstruction of a complex flow configuration, allowing further biophysical analysis of these peculiar polymer dynamics.

6 Conclusion

We have developed an algorithm for the resolution of multiple intersecting and/or interlacing particle flow fields of high density and with low stability of individual particles. The approach captures the instantaneous state of particle flows by considering three time points at a time, together with a self-calibrating cost function, permitting robust tracking in a wide range of applications. The robustness of the tracking relies on i) the use of a single user-defined control parameter, the search radius; ii) fast convergence of an iterative solution that adjusts all other control parameters on the fly; and iii) reliable handling of particle appearances and disappearances during the assignment of particles into tracks. The distinguishing concept of the approach is to explicitly balance the selection of false positive and false negative assignments, in contrast to most existing particle-based tracking methods, which implicitly maximize the number of assignments. The latter generally introduces numerous false positive assignments, or requires the introduction of a cost threshold beyond which assignments are rejected. By performing multi-objective optimization, local and regional motion models can be imposed adaptively on ever-changing particle behaviors without over-linking particles.

Practical applications illustrating the performance of optimal-flow minimum-cost particle tracking were drawn from microscope image sequences capturing the dynamics of molecular machines in cells. These data pose substantial challenges in terms of robustness requirements, complexity of interdigitating flow fields, and spatiotemporal variation of particle behavior within the flow field. Tracking in unstructured crowded scenes has recently gained attention in computer vision for the surveillance of human or vehicle motion in vulnerable areas such as airports, train stations, or highways [38]. These problems have very similar characteristics to the cell biological applications discussed in this paper. Although it is outside the scope of this report to formally compare optimal-flow minimum-cost particle tracking in surveillance settings, it is worth pointing out that the presented methods bring several strengths to these types of applications. First, in contrast to the algorithm proposed by Ali et al. [39], which is limited to one flow direction per location in a scene, our method can resolve several intermingled flows and is also robust in the presence of a large fraction of stationary or randomly moving particles. Second, our algorithm requires as few as three frames to capture the changes in the configuration of multidirectional flow fields. If the motion models are known, for instance straight equidistant particle steps between frames, the weights of the cost function can be fixed. This will increase the speed of the solution several-fold compared to our current iterative cost-function adjustment, allowing real-time tracking of particle flow fields. Thus, it could be used to predict on the fly anomalous behaviors or congestions associated with a security alert at an airport terminal or with the arrival of a new train in a highly frequented station. On the other hand, our algorithm relies on an explicit detection of objects for tracking. This may limit its use in scenes with low granularity, a problem that has been successfully addressed by Rodriguez et al. [40] using optical flow methods. Future work will focus on testing the method proposed here on this problem and on combining it with texture segmentation to relax the requirement for discrete particle detection.

Supplementary Material


Video Material

Video 1. Bird Sequence

The features are in white. The images and coordinates of detected features were obtained from Khurram Shafique [17]. The vectors resulting from the tracking software are overlaid in yellow. False positive assignments are rare.


Video 2. Fish Sequence

The features are in white. The images and coordinates of detected features were obtained from Khurram Shafique [17]. The vectors resulting from the tracking software are overlaid in yellow. False positive assignments are rare.


Video 3. Particle Image Velocimetry

The images (three frames) were generated using the simulation package described in [30]. 50% of the feature coordinates were perturbed. Note the high feature density. Only the raw data are shown in the video; the tracks are shown in Fig. 4.


Video 4. Two Anti-Parallel Flows of Fluorescent Speckles in a Bi-Polar Mitotic Spindle

In mitotic spindles, fluorescently labeled microtubule lattices display two flows (groups) of speckles forming an interdigitating pattern. The spindle poles, which are the focal points of the microtubules, are situated at the upper and lower ends of the images in preparation for chromosome segregation. The video shows a side-by-side comparison of the raw data (left panel) and the time evolution of the two anti-parallel particle flows (right panel).


Video 5. Three Interdigitating Flows of Fluorescent Speckles in a Tri-Polar Mitotic Spindle

In comparison to the example in Video 4, a few microtubules have attached to the cover slip during imaging and have thus formed a (third) flow of speckles toward the lower left corner of the image (left panel). The time evolution of the three tracked flows is shown for the cropped and enlarged area in the right panel.


Acknowledgments

This work was supported by grant U01 GM 67230 to G.D. The authors would like to thank Khurram Shafique for benchmarking data.

Biographies

Alexandre Matov received his Ph.D. in computational cell biology from the Scripps Research Institute in La Jolla, California in 2009. He has completed postgraduate programs in communication systems at EPF Lausanne and in biophysics at ETH Zurich in Switzerland, and holds a M.S. degree in telecommunications from the Technical University of Varna, Bulgaria. During his dissertation and postdoctoral work, he developed computer vision algorithms for the detection of fluorescent features and particle tracking in video microscopy image sequences. His biomedical research focus has been on quantitative measurements and modeling of cytoskeleton dynamics in living cells. He is a member of IEEE and ASCB.

Marcus Edvall received his M.S. degree in chemical engineering with engineering physics from Chalmers University of Technology (Goteborg, Sweden) in conjunction with Imperial College London in 2000. From 2000 to 2003, he worked as a consultant for the pulp and paper industry on model predictive control systems. Since 2003, he has been the business and development manager of Tomlab Optimization Inc. His primary interest is to accelerate research activities for software development in the area of global black-box optimization and mixed-integer programming.

Ge Yang received his B.S. degree in mechanical engineering and B.S. degree in electrical engineering from Tsinghua University (Beijing, China) in 1991. He received his M.S. and Ph.D. degrees in mechanical engineering from the University of Minnesota Twin Cities in 2002 and 2004, respectively. He pursued his postdoctoral training in computational cell biology at the Scripps Research Institute (La Jolla, CA). Since 2009, he has been an assistant professor of computational biology and biomedical engineering at Carnegie Mellon University. His research interests are in computational biology, fluorescence imaging, and bioimage informatics.

Gaudenz Danuser received his doctoral degree in computer vision from ETH Zurich, Switzerland. Since his postdoctoral training, he has been interested in the application of image processing and computer vision to the discovery of molecular mechanisms of normal and diseased cell biological functions. He is a professor of cell biology at Harvard Medical School. He was a member of the editorial boards of IEEE Transactions on Image Processing and the Biophysical Journal between 2003 and 2008. Currently, he serves as a regular member of the NIH study section on Microscopic Imaging.


References

1. Barron J, Fleet DJ, Beauchemin S. Performance of Optical Flow Techniques. Int'l J Comp Vision. 1994;12:43–77.

2. Micheli ED, Torre V, Uras S. The accuracy of the computation of optical flow and of the recovery of motion parameters. IEEE Trans Pattern Anal Mach Intell. 1993;15:434–447.

3. Medioni G, Cohen I, Bremond F, Hongeng S, Nevatia R. Event Detection and Analysis from Video Streams. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2001;23:873–889.

4. Hu W, Tan T, Wang L, Maybank S. A Survey on Visual Surveillance of Object Motion and Behaviors. IEEE Transactions on Systems, Man, and Cybernetics. 2004;34:334–352.

5. Morris BT, Trivedi MM. A Survey of Vision-Based Trajectory Learning and Analysis for Surveillance. IEEE Transactions on Circuits and Systems for Video Technology. 2008;18:1114–1127.

6. Wu QX. A correlation-relaxation labeling framework for computing optical flow - template matching from a new perspective. IEEE Trans Pattern Anal Mach Intell. 1995;17:843–853.

7. Yilmaz A, Javed O, Shah M. Object tracking: A survey. ACM Computing Surveys. 2006;38

8. Chetverikov D. Applying feature tracking to Particle Image Velocimetry. International Journal of Pattern Recognition and Artificial Intelligence. 2003 Jun;17:487–504.

9. Keane RD, Adrian RJ. Theory of Cross-Correlation Analysis of Piv Images. Appl Sci Res. 1992 Jul;49:191–215.

10. Okamoto K. Particle cluster tracking algorithm in particle image velocimetry. JSME Int'l J Series B. 1998;41:151–154.

11. Kalaidzidis Y. Multiple objects tracking in fluorescence microscopy. Journal of Mathematical Biology. 2009 Jan;58:57–80.

12. Meijering E, Dzyubachyk O, Smal I, van Cappellen WA. Tracking in cell and developmental biology. Seminars in Cell & Developmental Biology. 2009 Oct;20:894–902.

13. Meijering E, Smal I, Danuser G. Tracking in molecular bioimaging. IEEE Signal Proc Mag. 2006 May;23:46–53.

14. Reid DB. An algorithm for tracking multiple targets. IEEE T Automat Contr. 1979;24:843–854.

15. Blackman SS. Multiple Hypothesis Tracking for Multiple Target Tracking. IEEE A&E Systems Magazine. 2004;19:5–18.

16. Veenman CJ, Reinders MJT, Backer E. Resolving Motion Correspondence for Densely Moving Points. IEEE T Pattern Anal. 2001;23:54–72.

17. Shafique K, Shah M. A Noniterative Greedy Algorithm for Multiframe Point Correspondence. IEEE T Pattern Anal. 2005;27:51–65.

18. Chetverikov D, Verestoy J. Feature Point Tracking for Incomplete Trajectories. Computing. 1999;62:321–338.

19.
