There is an even simpler argument to make to show that in high dimensions the search algorithm must visit at least 2^{Ω(d)} data points during the certification process.
Our argument is as follows. We will show in Lemma 4.1 that, with high probability, the distance between the query point q and a randomly drawn data point concentrates sharply on a radius that is larger than 1. This implies that, with high probability, the ball B(q, r) around the query has a radius larger than the side of the unit hypercube. It follows that B(q, r) crosses decision boundaries along every dimension, making it necessary to visit the corresponding partitions for certification.
Finally, because each level of the tree splits on a single dimension, the reasoning above means that the certification process must visit Ω(d) levels of the tree. As a result, we visit at least 2^{Ω(d)} data points. Of course, in high dimensions, we often have far fewer than 2^d data points, so that we end up visiting every vector during certification.
Lemma 4.1 The distance r between a randomly chosen point and its nearest neighbor among m points drawn uniformly at random from the unit hypercube is O(√d/m^{1/d}) with probability at least 1 − O(1/2^d).

Proof. Consider the ball of radius r in the d-dimensional unit hypercube with volume 1. Suppose, for notational convenience, that d is even; whether it is odd or even does not change our asymptotic conclusions. The volume of this ball is:

$$\frac{\pi^{d/2}\, r^d}{(d/2)!}.$$
Since we have m points in the hypercube, the expected number of points that are contained in the ball of radius r is therefore:

$$\frac{\pi^{d/2}\, r^d}{(d/2)!}\, m.$$
As a result, the radius r for which the ball contains one point in expectation is:
$$\frac{\pi^{d/2}\, r^d}{(d/2)!}\, m = 1 \;\Longrightarrow\; r = \frac{1}{\sqrt{\pi}}\left(\frac{(d/2)!}{m}\right)^{1/d} = \Theta\!\left(\frac{\sqrt{d}}{m^{1/d}}\right).$$
Using Stirling's formula and letting Θ consume the constants and small factors completes the claim that r = Θ(√d/m^{1/d}).
All that is left is bounding the probability of the event that r takes on the above value. For that, consider first the ball of radius r/2. The probability that this ball contains at least one point is at most 1/2^d. To see this, note that the probability that a single point falls into this ball is:
$$\frac{\pi^{d/2}\, (r/2)^d}{(d/2)!} = \frac{1}{2^d}\,\frac{\pi^{d/2}\, r^d}{(d/2)!} = \frac{1}{m\,2^d}.$$
By the Union Bound, the probability that at least one point out of m points falls into this ball is at most m × 1/(m 2^d) = 1/2^d.
Next, consider the ball of radius 2r. The probability that it contains no points at all is at most (1 − 2^d/m)^m ≈ exp(−2^d) < 1/2^d, where we used the approximation (1 − 1/z)^z ≈ exp(−1) and the fact that exp(−z) < 1/z. To see why, it is enough to compute the probability that a single point does not fall into a ball of radius 2r; by independence we then arrive at the joint probability above. That probability is 1 minus the probability that the point falls into the ball, which is itself:
$$\frac{\pi^{d/2}\, (2r)^d}{(d/2)!} = 2^d\,\frac{\pi^{d/2}\, r^d}{(d/2)!} = \frac{2^d}{m},$$
hence the total probability (1 − 2^d/m)^m.
We have therefore shown that the probability that the distance of interest is r is extremely high, on the order of 1 − 1/2^d, completing the proof. ∎
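As a quick numerical sanity check of the scale of r (an illustration we add here, not part of the original text), the following sketch solves the expected-one-point condition for r and compares it against √d/m^{1/d}; the two agree up to the constant absorbed by Θ, and r exceeds the unit hypercube's side once d is large relative to log m.

```python
import math

def expected_one_point_radius(d: int, m: int) -> float:
    # Radius r at which a d-dimensional Euclidean ball is expected to contain
    # exactly one of m uniformly random points, i.e. the solution of
    # pi^(d/2) r^d / Gamma(d/2 + 1) * m = 1.
    log_ball_const = (d / 2) * math.log(math.pi) - math.lgamma(d / 2 + 1)
    return math.exp(-(math.log(m) + log_ball_const) / d)

m = 1_000_000
for d in (2, 10, 100, 1000):
    r = expected_one_point_radius(d, m)
    print(f"d={d:4d}  r={r:7.3f}  sqrt(d)/m^(1/d)={math.sqrt(d) / m ** (1 / d):7.3f}")
```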
# 4.3 Randomized Trees
As we explained in Section 4.2, the search algorithm over a k-d Tree "index" consists of two operations: a single root-to-leaf traversal of the tree followed by backtracking to certify the candidate solution. As the analysis presented in the same section shows, it is the certification procedure that may need to visit virtually all data points. It is therefore not surprising that Liu et al. [2004] report that, in their experiments with low-dimensional vector collections (up to 30 dimensions), nearly 95% of the search time is spent in the latter phase. That observation naturally leads to the following question: What if we eliminated the certification step altogether? In other words, when given a query q, the search algorithm simply finds the cell that contains q in O(log(m/m∘)) time (where m = |X|), then returns the solution from among the m∘ vectors in that cell; a strategy Liu et al. [2004] call defeatist search. As Panigrahy [2008] shows for uniformly distributed vectors, however, the failure probability of the defeatist method is unacceptably high.
That is primarily because, when a query is close to a decision boundary, the optimal solution may very well be on the other side. Figure 4.2(a) illustrates this phenomenon. As both the construction and search algorithms are deterministic, such a failure scenario is intrinsic to the algorithm and cannot be corrected once the tree has been constructed. Decision boundaries are hard and fast rules.
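To make the defeatist strategy concrete, here is a minimal sketch of a defeatist lookup over a k-d Tree; the node fields (`dim`, `threshold`, `left`, `right`, `points`) are illustrative assumptions of ours, not the monograph's notation.

```python
import numpy as np

def defeatist_search(node, q):
    # Route the query to the single leaf whose cell contains q, then return
    # the best of its (at most m_o) vectors. No backtracking, no certification.
    while node.points is None:   # assumed convention: internal nodes store no points
        node = node.left if q[node.dim] <= node.threshold else node.right
    dists = np.linalg.norm(node.points - q, axis=1)
    return node.points[np.argmin(dists)]
```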
Fig. 4.2: Randomized construction of k-d Trees for a fixed collection of vectors (filled circles). Decision boundaries take random directions and are planted at a randomly-chosen point near the median. Repeating this procedure results in multiple "index" structures of the vector collection. Performing a "defeatist" search repeatedly for a given query (the empty circles) then leads to a higher probability of success.
regions as we split an internal node. As another example, we could place the decision boundaries at randomly chosen points close to the median, and have them take a randomly chosen direction. We illustrate the latter in Figure 4.2.
Such randomized decisions mean that, every time we construct a k-d Tree, we would obtain a different index of the data. Furthermore, by building a forest of randomized k-d Trees and repeating the defeatist search algorithm, we may be able to lower the failure probability!
These, as we will learn in this section, are indeed successful ideas that have been extensively explored in the literature [Liu et al., 2004, Ram and Sinha, 2019, Dasgupta and Sinha, 2015].
# 4.3.1 Randomized Partition Trees
Recall that a decision boundary in a k-d Tree is an axis-aligned hyperplane that is placed at the median point of the projection of data points onto a coordinate axis. Consider the following adjustment to that procedure, due originally to Liu et al. [2004] and further refined by Dasgupta and Sinha [2015]. Every time a node whose region is R is to be split, we first draw a random direction u by sampling a vector from the d-dimensional unit sphere, and a scalar β ∈ [1/4, 3/4] uniformly at random. We then project all data points that are in R onto u, and obtain the β-fractile of the projections, θ. u together with θ form the decision boundary. We then proceed as before to partition the data points in R, by following the rule ⟨u, v⟩ ≤ θ for every data point v ∈ R. A node turns into a leaf if it contains a maximum of m∘ vectors.
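A minimal sketch of this randomized split rule, assuming the vectors of the current node are stacked in a NumPy array; the function name and interface are ours.

```python
import numpy as np

def rp_split(points: np.ndarray, rng: np.random.Generator):
    # Draw a direction uniformly from the unit sphere and a fractile beta in
    # [1/4, 3/4]; send v to the left child iff <u, v> <= theta, where theta is
    # the beta-fractile of the projections.
    u = rng.standard_normal(points.shape[1])
    u /= np.linalg.norm(u)
    beta = rng.uniform(0.25, 0.75)
    proj = points @ u
    theta = np.quantile(proj, beta)
    left = proj <= theta
    return u, theta, points[left], points[~left]
```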
The procedure above gives us what Dasgupta and Sinha [2015] call a Randomized Partition (RP) Tree. You have already seen a visual demonstration of two RP Trees in Figure 4.2. Notice that, by requiring u to be a standard basis vector, fixing one u per each level of the tree, and letting β = 0.5, we reduce an RP Tree to the original k-d Tree.
What is the probability that a defeatist search over a single RP Tree fails to return the correct nearest neighbor? Dasgupta and Sinha [2015] proved that this probability is related to the following potential function:
$$\Phi(q, X) = \frac{1}{m} \sum_{i=2}^{m} \frac{\lVert q - x(\pi_1)\rVert_2}{\lVert q - x(\pi_i)\rVert_2} \qquad (4.3)$$
where m = |X|, and π1 through πm are indices that sort the data points by increasing distance to the query point q, so that x(π1) is the closest data point to q.
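As a small illustration (based on our reading of the reconstructed Equation (4.3)), Φ can be computed directly from pairwise distances:

```python
import numpy as np

def potential(q: np.ndarray, X: np.ndarray) -> float:
    # Phi(q, X): the sum, over all but the nearest neighbor, of
    # ||q - x(pi_1)|| / ||q - x(pi_i)||, divided by m.
    dists = np.sort(np.linalg.norm(X - q, axis=1))
    return float(np.sum(dists[0] / dists[1:]) / len(dists))
```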
Notice that a value of Φ that is close to 1 implies that nearly all data points are at the same distance from q. In that case, as we saw in Chapter 2, NN becomes unstable and approximate top-k retrieval becomes meaningless. When Φ is closer to 0, on the other hand, the optimal vector is well-separated from the rest of the collection.
Intuitively, then, Φ reflects the difficulty or stability of the NN problem for a given query point. It makes sense, then, that the probability of failure for q is related to this notion of difficulty of NN search: when the nearest neighbor is far from other vectors, a defeatist search is more likely to yield the correct solution.
# 4.3.1.1 A Potential Function to Quantify the Difficulty of NN Search
Before we state the relationship between the failure probability and the potential function above more concretely, let us take a detour and understand where the expression for Φ comes from. All the arguments that we are about to make, including the lemmas and theorems, come directly from Dasgupta and Sinha [2015], though we repeat them here using our adopted notation for completeness. We also present an expanded proof of the formal results, which follows the original proofs but elaborates on some of the steps.
Let us start with a simplified setup where X consists of just two vectors x and y. Suppose that for a query point q, ‖q − x‖2 ≤ ‖q − y‖2. It turns out that, if we choose a random direction u and project x and y onto it, then the probability that the projection of y onto u lands somewhere in between the projections of q and x onto u is a function of the potential function of Equation (4.3). The following lemma formalizes this relationship.
Lemma 4.2 Suppose q, x, y ∈ R^d and ‖q − x‖2 ≤ ‖q − y‖2. Let U ∈ S^{d−1} be a random direction and define ṽ = ⟨U, v⟩ for any vector v. Then the probability that ỹ falls between q̃ and x̃ is:
$$\mathbb{P}\big[(\tilde q < \tilde y < \tilde x) \vee (\tilde x < \tilde y < \tilde q)\big] = \frac{1}{\pi}\,\arcsin\!\left(\frac{\sqrt{\lVert x - q\rVert_2^2\,\lVert y - q\rVert_2^2 - \langle x - q,\, y - q\rangle^2}}{\lVert y - q\rVert_2\,\lVert y - x\rVert_2}\right).$$
Proof. Assume, without loss of generality, that U is sampled from a d-dimensional standard Normal distribution: U ∼ N(0, I_d). That assumption is inconsequential because normalizing U by its L2 norm gives a vector that lies on S^{d−1} as required. But because the norm of U does not affect the argument, we need not explicitly perform the normalization.
Suppose further that we translate all vectors by q, and redefine q ← 0, x ← x − q, and y ← y − q. We then rotate the vectors so that x = ‖x‖2 e1, where e1 is the first standard basis vector. Neither the translation nor the rotation affects pairwise distances, and as such, no generality is lost due to these transformations.
Given this arrangement of vectors, it will be convenient to write U = (U1, U⊥) and y = (y1, y⊥) so as to make explicit the first coordinate of each vector (denoted by subscript 1) and the remaining coordinates (denoted by subscript ⊥).
It is safe to assume that y⊥ ≠ 0. The reason is that, if that were not the case, the two vectors x and y would have an intrinsic dimensionality of 1 and would thus lie on a line. In that case, no matter which direction U we choose, ỹ will not fall between x̃ and q̃ = 0.
We can now write the probability of the event of interest as follows:
$$\mathbb{P}\big[(\tilde q < \tilde y < \tilde x) \vee (\tilde x < \tilde y < \tilde q)\big] = \mathbb{P}\big[\big(0 < \langle U, y\rangle < \lVert x\rVert_2\, U_1\big) \vee \big(\lVert x\rVert_2\, U_1 < \langle U, y\rangle < 0\big)\big].$$
By expanding ⟨U, y⟩ = y1 U1 + ⟨U⊥, y⊥⟩, it becomes clear that the expression above measures the probability that ⟨U⊥, y⊥⟩ falls in the interval (−y1|U1|, (‖x‖2 − y1)|U1|) when U1 > 0, or (−(‖x‖2 − y1)|U1|, y1|U1|) otherwise. As such, P[E] can be rewritten as follows:
$$\mathbb{P}[E] = \mathbb{P}\big[-y_1\lvert U_1\rvert < \langle U_\perp, y_\perp\rangle < (\lVert x\rVert_2 - y_1)\lvert U_1\rvert \,\big|\, U_1 \ge 0\big]\,\mathbb{P}[U_1 \ge 0] \;+\; \mathbb{P}\big[-(\lVert x\rVert_2 - y_1)\lvert U_1\rvert < \langle U_\perp, y_\perp\rangle < y_1\lvert U_1\rvert \,\big|\, U_1 < 0\big]\,\mathbb{P}[U_1 < 0].$$
First, note that U1 is independent of U⊥ given that they are sampled from N(0, I_d). Second, observe that ⟨U⊥, y⊥⟩ is distributed as N(0, ‖y⊥‖2²), which is symmetric, so that the two intervals have the same probability mass. These two observations simplify the expression above, so that P[E] becomes:
$$\mathbb{P}[E] = \mathbb{P}\big[-y_1\lvert Z\rvert < \lVert y_\perp\rVert_2\, Z' < (\lVert x\rVert_2 - y_1)\lvert Z\rvert\big] = \mathbb{P}\left[-\frac{y_1}{\lVert y_\perp\rVert_2} < \frac{Z'}{\lvert Z\rvert} < \frac{\lVert x\rVert_2 - y_1}{\lVert y_\perp\rVert_2}\right],$$
where Z and Z′ are independent random variables drawn from N(0, 1).
Using the fact that the ratio of two independent Gaussian random variables follows a standard Cauchy distribution, we can calculate P[E] as follows:
$$\begin{aligned}
\mathbb{P}[E] &= \int_{-y_1/\lVert y_\perp\rVert_2}^{(\lVert x\rVert_2 - y_1)/\lVert y_\perp\rVert_2} \frac{dw}{\pi(1 + w^2)} \\
&= \frac{1}{\pi}\left[\arctan\!\left(\frac{\lVert x\rVert_2 - y_1}{\lVert y_\perp\rVert_2}\right) + \arctan\!\left(\frac{y_1}{\lVert y_\perp\rVert_2}\right)\right] \\
&= \frac{1}{\pi}\,\arctan\!\left(\frac{\lVert x\rVert_2\,\lVert y_\perp\rVert_2}{\lVert y\rVert_2^2 - y_1\lVert x\rVert_2}\right) \\
&= \frac{1}{\pi}\,\arcsin\!\left(\frac{\lVert x\rVert_2\,\lVert y_\perp\rVert_2}{\lVert y\rVert_2\,\sqrt{\lVert y\rVert_2^2 - 2y_1\lVert x\rVert_2 + \lVert x\rVert_2^2}}\right).
\end{aligned}$$
In the third equality, we used the fact that arctan a + arctan b = arctan((a + b)/(1 − ab)), and in the fourth equality we used the identity arctan a = arcsin(a/√(1 + a²)). Substituting y1 = ⟨y, x⟩/‖x‖2 and noting that x and y have been shifted by q completes the proof. ∎
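The closed form can be checked empirically. The sketch below is ours and uses our reconstruction of the lemma's right-hand side; it estimates the probability by sampling random directions and compares it to the formula.

```python
import numpy as np

def between_probability(q, x, y, trials=200_000, seed=0):
    # Monte Carlo estimate of P[y~ lands between q~ and x~] over random directions.
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((trials, len(q)))      # the norm of U is irrelevant
    pq, px, py = U @ q, U @ x, U @ y
    hit = ((pq < py) & (py < px)) | ((px < py) & (py < pq))
    return hit.mean()

def lemma_4_2(q, x, y):
    xq, yq = x - q, y - q
    num = np.sqrt(np.linalg.norm(xq) ** 2 * np.linalg.norm(yq) ** 2 - np.dot(xq, yq) ** 2)
    return float(np.arcsin(num / (np.linalg.norm(yq) * np.linalg.norm(y - x))) / np.pi)

q = np.zeros(5)
x = np.array([1.0, 0, 0, 0, 0])
y = np.array([1.0, 2, 0, 0, 0])
print(between_probability(q, x, y), lemma_4_2(q, x, y))   # the two should be close
```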
Corollary 4.1 In the same configuration as in Lemma 4.2:
$$\frac{2}{\pi}\,\Phi(q,\{x,y\})\,\sqrt{1 - \left(\frac{\langle x - q,\, y - q\rangle}{\lVert x - q\rVert_2\,\lVert y - q\rVert_2}\right)^2} \;\le\; \mathbb{P}\big[(\tilde q < \tilde y < \tilde x) \vee (\tilde x < \tilde y < \tilde q)\big] \;\le\; \Phi(q, \{x, y\}).$$
Proof. Applying the inequality θ ≥ sin θ ≥ 2θ/π for 0 ≤ θ ≤ π/2 to Lemma 4.2 implies the claim. ∎
Now that we have examined the case of X = {x, y}, it is easy to extend the result to a configuration of m vectors.
Theorem 4.1 Suppose q ∈ R^d, X ⊂ R^d is a set of m vectors, and x* ∈ X is the nearest neighbor of q. Let U ∈ S^{d−1} be a random direction, define ṽ = ⟨U, v⟩, and let X̃ = {x̃ | x ∈ X}. Then:
$$\mathbb{E}_U\big[\text{fraction of } \tilde{X} \text{ that is between } \tilde{q} \text{ and } \tilde{x}^*\big] \;\le\; \frac{1}{2}\,\Phi(q, X).$$
Proof. Let π1 through πm be indices that order the elements of X by increasing distance to q, so that x* = x(π1). Denote by Zi the event that ⟨U, x(πi)⟩ falls between x̃* and q̃. By Corollary 4.1:
$$\mathbb{P}[Z_i] \le \frac{1}{2}\,\frac{\lVert q - x(\pi_1)\rVert_2}{\lVert q - x(\pi_i)\rVert_2}.$$
We can now write the expectation of interest as follows:
$$\frac{1}{m}\sum_{i=2}^{m} \mathbb{P}[Z_i] \;\le\; \frac{1}{2}\,\Phi(q, X). \qquad \blacksquare$$
Corollary 4.2 Under the assumptions of Theorem 4.1, for any α ∈ (0, 1) and any s-subset S of X that contains x*:
$$\mathbb{P}\big[\text{at least an } \alpha \text{ fraction of } \tilde{S} \text{ is between } \tilde{q} \text{ and } \tilde{x}^*\big] \;\le\; \frac{1}{2\alpha}\,\Phi_s(q, X),$$
where:
$$\Phi_s(q, X) = \frac{1}{s}\sum_{i=2}^{s} \frac{\lVert q - x(\pi_1)\rVert_2}{\lVert q - x(\pi_i)\rVert_2},$$
and π1 through πs are indices of the s vectors in X that are closest to q, ordered by increasing distance.
Proof. Apply Theorem 4.1 to the set S to obtain:
$$\mathbb{E}\big[\text{fraction of } \tilde{S} \text{ that is between } \tilde{q} \text{ and } \tilde{x}^*\big] \;\le\; \frac{1}{2}\,\Phi(q, S) \;\le\; \frac{1}{2}\,\Phi_s(q, X).$$
Using Markov's inequality (i.e., P[Z ≥ α] ≤ E[Z]/α) completes the proof. ∎
The above is where the potential function of Equation (4.3) first emerges in its complete form for an arbitrary collection of vectors and its subsets. As we see, Φ bounds the expected fraction of vectors whose projection onto a random direction U falls between a query point and its nearest neighbor.
The reason this expected value is important (which subsequently justifies the importance of Φ) has to do with the fact that decision boundaries are planted at some β-fractile point of the projections. As such, a bound on the number of points that fall between q and its nearest neighbor serves as a tool to bound the odds that the decision boundary may separate q from its nearest neighbor, which is the failure mode we wish to quantify.
# 4.3.1.2 Probability of Failure
We are now ready to use Theorem 4.1 and Corollary 4.2 to derive the failure probability of the defeatist search over an RP Tree. To that end, notice that the path from the root to a leaf is a sequence of log_{1/β}(m/m∘) independent decisions that involve randomly projected data points. So if we can bound the failure probability of a single node, we can apply the union bound and obtain a bound on the failure probability of the tree. That is the intuition that leads to the following result.
Theorem 4.2 The probability that an RP Tree built for collection X of m vectors fails to find the nearest neighbor of a query q is at most:
$$\sum_{l=0}^{\ell} \Phi_{\beta^l m}\,\ln\frac{2e}{\Phi_{\beta^l m}},$$

with β = 3/4 and ℓ = log_{1/β}(m/m∘), and where we use the shorthand Φ_s for Φ_s(q, X).
Proof. Consider an internal node of the RP Tree that contains q and s data points including x*, the nearest neighbor of q. If the decision boundary at this node separates q from x*, then the defeatist search will fail. We therefore seek to quantify the probability of that event.
Denote by F the fraction of the s vectors that, once projected onto the random direction U associated with the node, fall between q̃ and x̃*. Recall that the split threshold associated with the node is drawn uniformly from an interval of mass 1/2. As such, the probability that q is separated from x* is at most F/(1/2). By integrating over F, we obtain:
$$\begin{aligned}
\mathbb{P}\big[q \text{ is separated from } x^*\big] &\le \int_0^1 \mathbb{P}[F = f]\,\frac{f}{1/2}\, df = 2\int_0^1 \mathbb{P}[F \ge f]\, df \\
&\le 2\int_0^1 \min\!\Big(1, \frac{\Phi_s}{2f}\Big)\, df = 2\int_0^{\Phi_s/2} df + 2\int_{\Phi_s/2}^{1} \frac{\Phi_s}{2f}\, df \\
&= \Phi_s \ln\frac{2e}{\Phi_s}.
\end{aligned}$$
The first equality uses the definition of expectation for a positive random variable, while the second inequality uses Corollary 4.2. Applying the union bound to a path from root to leaf, and noting that the size of the collection that falls into each node drops geometrically per level by a factor of at least 3/4, completes the proof. ∎
We are thus able to express the failure probability as a function of Φ, a quantity that is defined for a particular q and a concrete collection of vectors. If we have a model of the data distribution, it may be possible to state more general bounds by bounding Φ itself. Dasgupta and Sinha [2015] demonstrate examples of this for two practical data distributions. Let us review one such example here.
# 4.3.1.3 Data Drawn from a Doubling Measure
Throughout our analysis of k-d Trees in Section 4.2, we considered the case where data points are uniformly distributed in R^d. As we argued in Chapter 3, in many practical situations, however, even though vectors are represented in R^d, they actually lie in some low-dimensional manifold with intrinsic dimension d∘ where d∘ ≪ d. This happens, for example, when data points are drawn from a doubling measure with low dimension as defined in Definition 3.1.
Dasgupta and Sinha [2015] prove that, if a collection of m vectors is sampled from a doubling measure with dimension d∘, then Φ can be bounded from above roughly by (1/m)^{1/d∘}. The following theorem presents their claim.
Theorem 4.3 Suppose a collection X of m vectors is drawn from µ, a continuous doubling measure on R^d with dimension d∘ ≥ 2. For an arbitrary δ ∈ (0, 1/2), with probability at least 1 − 3δ, for all 2 ≤ s ≤ m:
$$\Phi_s(q, X) \le 6\left(\frac{2s}{\delta m}\right)^{1/d_\circ}.$$
Using the result above, Dasgupta and Sinha [2015] go on to prove that, under the same conditions, with probability at least 1 − 3δ, the failure probability of an RP Tree is bounded above by:
$$c_\circ\, d_\circ \left(\frac{m_\circ}{\delta m}\right)^{1/d_\circ} \ln\frac{\delta m}{m_\circ},$$
where c∘ is an absolute constant, and m∘ ≥ c∘ 3^{d∘} max(1, ln 1/δ).
The results above tell us that, so long as the space has a small intrinsic dimension, we can make the probability of failing to find the optimal solution arbitrarily small.
# 4.3.2 Spill Trees
The Spill Tree [Liu et al., 2004] is another randomized variant of the k-d Tree. The algorithm to construct a Spill Tree comes with a hyperparameter α ∈ [0, 1/2] that is typically a small constant close to 0. Given an α, the Spill Tree modifies the tree construction algorithm of the k-d Tree as follows. When splitting a node whose region is R, we first project all vectors contained in R onto a random direction U, then find the median of the resulting distribution. However, instead of partitioning the vectors based on which side of the median they are on, the algorithm forms two overlapping sets. The "left" set contains all vectors in R whose projection onto U is smaller than the (1/2 + α)-fractile point of the distribution, while the "right" set consists of those that fall to the right of the (1/2 − α)-fractile point. As before, a node becomes a leaf when it has a maximum of m∘ vectors.
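The following sketch shows one overlapping split, assuming the node's vectors sit in a NumPy array; the function name and interface are ours.

```python
import numpy as np

def spill_split(points: np.ndarray, alpha: float, rng: np.random.Generator):
    # Project onto a random direction; the two children overlap on the vectors
    # whose projections fall between the (1/2 - alpha)- and (1/2 + alpha)-fractiles.
    u = rng.standard_normal(points.shape[1])
    u /= np.linalg.norm(u)
    proj = points @ u
    lo, median, hi = np.quantile(proj, [0.5 - alpha, 0.5, 0.5 + alpha])
    left = points[proj <= hi]          # everything left of the (1/2 + alpha)-fractile
    right = points[proj >= lo]         # everything right of the (1/2 - alpha)-fractile
    return u, median, left, right      # queries are routed by comparing <u, q> to the median
```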
During search, the algorithm performs a defeatist search by routing the query point q based on a comparison of its projection onto the random direction associated with each node, and the median point. It is clear that with this strategy, if the nearest neighbor of q is close to the decision boundary of a node, we do not increase the likelihood of failure whether we route q to the left child or to the right one. Figure 4.3 shows an example of the defeatist search over a Spill Tree.
Fig. 4.3: Defeatist search over a Spill Tree. In a Spill Tree, vectors that are close to the decision boundary are, in effect, duplicated, with their copy "spilling" over to the other side of the boundary. This is depicted for a few example regions as the blue shaded area that straddles the decision boundary: vectors that fall into the shaded area belong to neighboring regions. For example, regions G and H share two vectors. As such, a defeatist search for the example query (the empty circle) looks through not just the region E but its extended region that overlaps with F.
# 4.3.2.1 Space Overhead
One obvious downside of the Spill Tree is that a single data point may end up in multiple leaf nodes, which increases the space complexity. We can quantify that by noting that the depth of the tree on a collection of m vectors is at most log_{1/(1/2+α)}(m/m∘), so that the total number of vectors in all leaves is:
$$m_\circ\, 2^{\log_{1/(1/2+\alpha)}(m/m_\circ)} \;=\; m_\circ \left(\frac{m}{m_\circ}\right)^{1/(1 - \log(1+2\alpha))}.$$
As such, the space complexity of a Spill Tree is O(m^{1/(1 − log(1+2α))}).
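To get a sense of how quickly the overhead grows with α, here is a small illustration we add (taking the logarithm in the exponent to be base 2, as the derivation above assumes):

```python
import math

for alpha in (0.0, 0.05, 0.1, 0.2):
    exponent = 1.0 / (1.0 - math.log2(1.0 + 2.0 * alpha))
    print(f"alpha = {alpha:.2f}  ->  space grows as O(m^{exponent:.2f})")
```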
# 4.3.2.2 Probability of Failure
The defeatist search over a Spill Tree fails to return the nearest neighbor x* if the following event takes place at any of the nodes that contain q and x*: the projections of q and x* are separated by the median, and the projection of x* is separated from the median by at least an α fraction of the vectors. That event happens when the projections of q and x* are separated by at least an α fraction of the vectors in some node along the path.
The probability of the event above can be bounded by Corollary 4.2. By applying the union bound to a root-to-leaf path, and noting that the size of the collection reduces at each level by a factor of at least 1/2 + α, we obtain the following result:
Theorem 4.4 The probability that a Spill Tree built for collection X of m vectors fails to find the nearest neighbor of a query q is at most:

$$\frac{1}{2\alpha} \sum_{l=0}^{\ell} \Phi_{\beta^l m}(q, X),$$

with β = 1/2 + α and ℓ = log_{1/β}(m/m∘).
# 4.4 Cover Trees
The branch-and-bound algorithms we have reviewed thus far divide a collection recursively into exactly two sub-collections, using a hyperplane as a decision boundary. Some also have a certification process that involves backtracking from a leaf node whose region contains a query to the root node. As we noted in Section 4.1, however, none of these choices is absolutely necessary. In fact, branching and bounding can be done entirely differently. We review in this section a popular example that deviates from that pattern, a data structure known as the Cover Tree [Beygelzimer et al., 2006].
It is more intuitive to describe the Cover Tree, as well as the construction and search algorithms over it, in the abstract first. This is what Beygelzimer et al. [2006] call the implicit representation. Let us first describe its structure, then review its properties and explain the relevant algorithms, and only then discuss how the abstract tree can be implemented concretely.
# 4.4.1 The Abstract Cover Tree and its Properties
The abstract Cover Tree is a tree structure with infinite depth that is defined for a proper metric δ(·, ·). Each level of the tree is numbered by an integer that starts from ∞ at the level of the root node and decrements, down to −∞, with each subsequent level. Each node represents a single data point. If we denote the collection of nodes on level ℓ by C_ℓ, then C_ℓ is a set, in the sense that the data points represented by those nodes are distinct. But C_ℓ ⊂ C_{ℓ−1}, so that once a node appears in level ℓ, it necessarily appears in levels (ℓ − 1) onward. That implies that, in the abstract Cover Tree, C_∞ contains a single data point, and C_{−∞} = X is the entire collection.
This structure, which is illustrated in Figure 4.4 for an example collection of vectors, obeys three invariants. That is, all algorithms that construct the tree or manipulate it in any way must guarantee that the three properties are not violated. These invariants are:
• Nesting: As we noted, C_ℓ ⊂ C_{ℓ−1}.
• Covering: For every node u ∈ C_{ℓ−1} there is a node v ∈ C_ℓ such that δ(u, v) < 2^ℓ. In other words, every node in the next level (ℓ − 1) of the tree is "covered" by an open ball of radius 2^ℓ around a node in the current level, ℓ.
• Separation: All nodes on the same level ℓ are separated by a distance of at least 2^ℓ. Formally, if u, v ∈ C_ℓ, then δ(u, v) > 2^ℓ.
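For concreteness, the three invariants can be checked mechanically on an explicitly materialized set of levels; the sketch below is ours and assumes levels are given as (level, set-of-points) pairs with consecutive level numbers, ordered from top to bottom.

```python
def check_invariants(levels, delta):
    # levels: list of (l, C_l) pairs, top-down; delta: the metric.
    for (l, C_l), (_, C_below) in zip(levels, levels[1:]):
        assert C_l <= C_below, "nesting: C_l must be a subset of C_{l-1}"
        assert all(any(delta(u, v) < 2 ** l for v in C_l) for u in C_below), \
            "covering: every node on level l-1 lies within 2^l of some node on level l"
        assert all(delta(u, v) > 2 ** l for u in C_l for v in C_l if u != v), \
            "separation: nodes on level l must be more than 2^l apart"
```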
Fig. 4.4: Illustration of the abstract Cover Tree for a collection of 8 vectors. Nodes on level ℓ of the tree are separated by at least 2^ℓ by the separation invariant. Nodes on level ℓ cover nodes on level (ℓ − 1) with a ball of radius at most 2^ℓ by the covering invariant. Once a node appears in the tree, it will appear on all subsequent levels as its own child (solid arrows), by the nesting invariant.
# Algorithm 1: Nearest Neighbor search over a Cover Tree.
Input: Cover Tree with metric δ(·, ·); query point q.
Result: Exact NN of q.
1: Q_∞ ← C_∞    ▷ C_ℓ is the set of nodes on level ℓ
2: for ℓ from ∞ to −∞ do
3:   Q ← {Children(v) | v ∈ Q_ℓ}    ▷ Children(·) returns the children of its argument
4:   Q_{ℓ−1} ← {u ∈ Q | δ(q, u) ≤ δ(q, Q) + 2^ℓ}    ▷ δ(u, S) = min_{v∈S} δ(u, v)
5: end for
6: return arg min_{u ∈ Q_{−∞}} δ(q, u)
# 4.4.2 The Search Algorithm
We have seen what a Cover Tree looks like and what properties it is guaranteed to maintain. Given this structure, how do we find the nearest neighbor of a query point? That turns out to be a fairly simple algorithm as shown in Algorithm 1.
Algorithm 1 always maintains a current set of candidates in Q_ℓ as it visits level ℓ of the tree. In each iteration of the loop on Line 2, it creates a temporary set, denoted by Q, by collecting the children of all nodes in Q_ℓ. It then prunes the nodes in Q based on the condition on Line 4. Eventually, the algorithm returns the exact nearest neighbor of query q by performing exhaustive search over the nodes in Q_{−∞}.
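A compact sketch of this search loop follows. It is ours: it assumes the tree is materialized only down to some bottom level, and that a `children(v, l)` callback returns v's children on level l − 1 (with v listed among its own children, per the nesting invariant).

```python
def cover_tree_search(root, top_level, bottom_level, children, q, delta):
    Q = {root}
    for l in range(top_level, bottom_level - 1, -1):
        candidates = {u for v in Q for u in children(v, l)}            # expand to children
        best = min(delta(q, u) for u in candidates)
        Q = {u for u in candidates if delta(q, u) <= best + 2 ** l}    # pruning rule of Line 4
    return min(Q, key=lambda u: delta(q, u))
```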
Let us understand why the algorithm is correct. In a way, it is enough to argue that the pruning condition on Line 4 never discards an ancestor of the nearest neighbor. If that were the case, we are done proving the correctness of the algorithm: Q_{−∞} is guaranteed to contain the nearest neighbor, at which point we will find it on Line 6.
The fact that Algorithm 1 never prunes the ancestor of the solution is easy to establish. To see how, consider the distance between u ∈ C_{ℓ−1} and any of its descendants, v. That distance is bounded as follows: δ(u, v) ≤ Σ_{l ≤ ℓ−1} 2^l ≤ 2^ℓ. Furthermore, because δ is proper, by the triangle inequality we know that δ(q, u) ≤ δ(q, u*) + δ(u*, u) ≤ δ(q, Q) + 2^ℓ, where u* is the solution and a descendant of u ∈ C_{ℓ−1}. As such, any candidate whose distance from q is greater than δ(q, Q) + 2^ℓ can be safely pruned without discarding the ancestor of u*.
The search algorithm has an ϵ-approximate variant too. To obtain a solution that is at most (1 + ϵ)δ(q, u*) away from q, assuming u* is the optimal solution, we need only change the termination condition on Line 2, by exiting the loop as soon as δ(q, Q_ℓ) ≥ 2^{ℓ+1}(1 + 1/ϵ). Let us explain why the resulting algorithm is correct.
Suppose that the algorithm terminates early when it reaches level ℓ. That means that 2^{ℓ+1}(1 + 1/ϵ) ≤ δ(q, Q_ℓ). We have already seen that the ancestor of u* in Q_ℓ is within 2^{ℓ+1} of u*, and, by the triangle inequality, that δ(q, Q_ℓ) ≤ δ(q, u*) + 2^{ℓ+1}. So we have bounded δ(q, Q_ℓ) from below and above, resulting in the following inequality:
$$2^{\ell+1}\Big(1 + \frac{1}{\epsilon}\Big) \le \delta(q, u^*) + 2^{\ell+1} \;\Longrightarrow\; 2^{\ell+1} \le \epsilon\,\delta(q, u^*).$$
Putting all that together, we have shown that δ(q, Q_ℓ) ≤ (1 + ϵ)δ(q, u*), so that Line 6 returns an ϵ-approximate solution.
# 4.4.3 The Construction Algorithm
Inserting a single vector into the Cover Tree "index" is a procedure that is similar to the search algorithm but is better conceptualized recursively, as shown in Algorithm 2.
It is important to note that the procedure in Algorithm 2 assumes that the point p is not present in the tree. That is a harmless assumption, as the existence of p can be checked by a simple invocation of Algorithm 1. We can therefore safely assume that δ(p, Q) for any Q formed on Line 1 is strictly positive.
# Algorithm 2: Insertion of a vector into a Cover Tree.
Input: Cover Tree T with metric δ(·, ·); new vector p; level ℓ; candidate set Q_ℓ.
Result: Cover Tree containing p.
1: Q ← {Children(u) | u ∈ Q_ℓ}
2: if δ(p, Q) > 2^ℓ then
3:     return ⊥
4: else
5:     Q_{ℓ−1} ← {u ∈ Q | δ(p, u) ≤ 2^ℓ}
6:     if Insert(p, ℓ − 1, Q_{ℓ−1}) = ⊥ and δ(p, Q_ℓ) ≤ 2^ℓ then
7:         pick u ∈ Q_ℓ such that δ(p, u) ≤ 2^ℓ
8:         add p to Children(u)
9:         return ⊤
10:    else
11:        return ⊤
12:    end if
13: end if
That assumption guarantees that the algorithm eventually terminates. That is because δ(p, Q) > 0, so that ultimately we will invoke the algorithm with a value ℓ such that δ(p, Q) > 2^ℓ, at which point Line 2 terminates the recursion.
We can also see why Line 6 is bound to evaluate to True at some point during the execution of the algorithm. That is because there must exist a level ℓ such that 2^{ℓ−1} < δ(p, Q) ≤ 2^ℓ. That implies that the point p will ultimately be inserted into the tree.
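For readers who prefer running code, the same procedure can be sketched in Python as follows, reusing the Node type and dist function from the earlier search sketch. This is only an illustration of Algorithm 2 under the same assumptions as above (in particular, that p is not already in the tree), not a complete Cover Tree implementation.

```python
def insert(p, candidates, level, dist):
    """Sketch of Algorithm 2; returns True if p was attached to a parent node."""
    # Line 1: the children of the current candidate set (a node is its own child).
    Q = candidates + [c for n in candidates for c in n.children]
    # Lines 2-3: if every candidate is too far, no parent exists at or below this level.
    if min(dist(p, n.point) for n in Q) > 2 ** level:
        return False
    # Line 5: keep only the nodes that may cover p one level down.
    Q_down = [n for n in Q if dist(p, n.point) <= 2 ** level]
    inserted_below = insert(p, Q_down, level - 1, dist)
    # Lines 6-9: if no deeper parent was found but some candidate at this level
    # covers p, attach p here.
    if not inserted_below and min(dist(p, n.point) for n in candidates) <= 2 ** level:
        parent = next(n for n in candidates if dist(p, n.point) <= 2 ** level)
        parent.children.append(Node(p, level - 1))
        return True
    return inserted_below
```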
What about the three invariants of the Cover Tree? We must now show that the resulting tree maintains those properties: nesting, covering, and separation. The covering invariant is immediately guaranteed as a result of Line 6. The nesting invariant too is trivially maintained because we can insert p as its own child for all subsequent levels.
What remains is to show that the insertion algorithm maintains the separation property. To that end, suppose p has been inserted into C_{ℓ−1} and consider its sibling u ∈ C_{ℓ−1}. If u ∈ Q, then it is clear that δ(p, u) > 2^{ℓ−1} because Line 6 must have evaluated to True. On the other hand, if u ∉ Q, that means that there was some ℓ′ > ℓ where some ancestor of u, u′ ∈ C_{ℓ′−1}, was pruned on Line 5, so that δ(p, u′) > 2^{ℓ′}. Using the covering invariant, we can deduce that:
δ(p, u) ≥ δ(p, u′) − Σ_{l=ℓ}^{ℓ′−1} 2^l = δ(p, u′) − (2^{ℓ′} − 2^ℓ) > 2^{ℓ′} − (2^{ℓ′} − 2^ℓ) = 2^ℓ.
That concludes the proof that δ(p, C_{ℓ−1}) > 2^{ℓ−1}, showing that Algorithm 2 maintains the separation invariant.
# 4.4.4 The Concrete Cover Tree
The abstract tree we described earlier has infinite depth. While that representation is convenient for explaining the data structure and algorithms that operate on it, it is not practical. But it is easy to derive a concrete instance of the data structure, without changing the algorithmic details, to obtain what Beygelzimer et al. [2006] call the explicit representation.
One straightforward way of turning the abstract Cover Tree into a concrete one is by turning a node into a (terminal) leaf if it is its only child; recall that a node in the abstract Cover Tree is its own child, indefinitely. For example, in Figure 4.4, all nodes on level 0 would become leaves and the Cover Tree would end at that depth. We leave it as an exercise to show that the concrete representation of the tree does not affect the correctness of Algorithms 1 and 2.
The concrete form is not only important for making the data structure practical, it is also necessary for analysis. For example, Beygelzimer et al. [2006] prove that the space complexity of the concrete Cover Tree is O(m) with m = |X|, whereas the abstract form is infinitely large. The analysis of the time complexity of the insertion and search algorithms also uses the concrete form, but further requires assumptions on the data distribution. Beygelzimer et al. [2006] present their analysis for vectors that are drawn from a doubling measure, as we have defined in Definition 3.1. However, their claims have been disputed [Curtin, 2016] by counter-examples [Elkin and Kurlin, 2022], and corrected in a recent work [Elkin and Kurlin, 2023].
# 4.5 Closing Remarks
This chapter has only covered algorithms that convey the foundations of a branch-and-bound approach to NN search. Indeed, we left out a number of alternative constructions that are worth mentioning as we close this chapter.
# 4.5.1 Alternative Constructions and Extensions
The standard k-d Tree itself, as an example, can be instantiated by using a different splitting procedure, such as splitting on the axis along which the data exhibits the greatest spread. PCA Trees [Sproull, 1991], PAC Trees [McNames, 2001],
and Max-Margin Trees [Ram et al., 2012] offer other ways of choosing the axis or direction along which the algorithm partitions the data. Vantage-point Trees [Yianilos, 1993], as another example, follow the same iterative procedure as k-d Trees, but partition the space using hyperspheres rather than hyperplanes.
There are also various other randomized constructions of tree index structures for NN search. Panigrahy [2008], for instance, constructs a standard k-d Tree over the original data points but, during search, perturbs the query point. Repeating the perturb-then-search scheme reduces the failure probability of a defeatist search over the k-d Tree.
Sinha [2014] proposes a different variant of the RP Tree where, instead of a random projection, they choose the principal direction corresponding to the largest eigenvalue of the covariance of the vectors that fall into a node. This is equivalent to the PAC Tree [McNames, 2001] with the exception that the splitting threshold (i.e., the β-fractile point) is chosen randomly, rather than setting it to the median point. Sinha [2014] shows that, with the modified algorithm, a smaller ensemble of trees is necessary to reach high retrieval accuracy, as compared with the original RP Tree construction.
Sinha and Keivani [2017] improve the space complexity of RP Trees by replacing the d-dimensional dense random direction with a sparse random projection using the Fast Johnson-Lindenstrauss Transform [Ailon and Chazelle, 2009]. The result is that every internal node of the tree has to store a sparse vector whose number of non-zero coordinates is far less than d. This space-efficient variant of the RP Tree offers virtually the same theoretical guarantees as the original RP Tree structure.
Ram and Sinha [2019] improve the running time of the NN search over an RP Tree (which is O(d log m) for m = |X|) by first randomly rotating the vectors in a pre-processing step, then applying the standard k-d Tree to the rotated vectors. They show that such a construction leads to a search time complexity of O(d log d + log m) and offers the same guarantees on the failure probability as the RP Tree.
Cover Trees too have been the center of much research. As we have already mentioned, many subsequent works [Elkin and Kurlin, 2022, 2023, Curtin, 2016] investigated the theoretical results presented in the original paper [Beygelzimer et al., 2006] and corrected or improved the time complexity bounds on the insertion and search algorithms. Izbicki and Shelton [2015] simplified the structure of the concrete Cover Tree to make its implementation more efficient and cache-aware. Gu et al. [2022] proposed parallel insertion and deletion algorithms for the Cover Tree to scale the algorithm to real-world vector collections. We should also note that the Cover Tree itself is an extension (or, rather, a simplification) of Navigating Nets [Krauthgamer and Lee, 2004], which itself has garnered much research.
It is also possible to extend the framework to MIPS. That may be surprising. After all, the machinery of the branch-and-bound framework rests on the
assumption that the distance function has all the nice properties we expect from a metric space. In particular, we take for granted that the distance is non-negative and that distances obey the triangle inequality. As we know, however, none of these properties holds when the distance function is inner product.
As Bachrach et al. [2014] show, however, it is possible to apply a rank-preserving transformation to vectors such that solving MIPS over the original space is equivalent to solving NN over the transformed space. Ram and Gray [2012] take a different approach and derive bounds on the inner product between an arbitrary query point and vectors that are contained in a ball associated with an internal node of the tree index. This bound allows the certification process to proceed as usual. Nonetheless, these methods face the same challenges as k-d Trees and their variants.
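As a brief illustration of the first idea, the following Python sketch augments vectors in the spirit of the Euclidean transformation of Bachrach et al. [2014]; the function names are ours and the snippet is only meant to show why the reduction preserves the ranking.

```python
import numpy as np

def augment_data(X):
    """Map data vectors as x -> [sqrt(M^2 - ||x||^2), x], with M = max_i ||x_i||."""
    norms = np.linalg.norm(X, axis=1)
    M = norms.max()
    return np.hstack([np.sqrt(M ** 2 - norms ** 2)[:, None], X])

def augment_query(q):
    """Map the query as q -> [0, q]; its inner product with any x is unchanged."""
    return np.concatenate([[0.0], q])

# ||q_aug - x_aug||^2 = ||q||^2 + M^2 - 2 <q, x>, so minimizing the Euclidean
# distance over the augmented vectors maximizes the inner product over the originals.
rng = np.random.default_rng(0)
X, q = rng.normal(size=(1000, 16)), rng.normal(size=16)
X_aug, q_aug = augment_data(X), augment_query(q)
assert np.argmin(np.linalg.norm(X_aug - q_aug, axis=1)) == np.argmax(X @ q)
```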
# 4.5.2 Future Directions
The literature on branch-and-bound algorithms for top-k retrieval is rather mature and stable at the time of this writing. While publications on this fascinating class of algorithms continue to date, most recent works either improve the theoretical analysis of existing algorithms (e.g., [Elkin and Kurlin, 2023]), improve their implementation (e.g., [Ram and Sinha, 2019]), or adapt their implementation to other computing paradigms such as distributed systems (e.g., [Gu et al., 2022]).
Indeed, such research is essential. Tree indices are, as the reader will undoubtedly learn after reading this monograph, among the few retrieval algorithms that rest on a sound theoretical foundation. Crucially, their implementations too reflect those theoretical principles: There is little to no gap between theoretical tree indices and their concrete forms. Improving their theoretical guarantees and modernizing their implementation, therefore, makes a great deal of sense, especially so because works like [Ram and Sinha, 2019] show how competitive tree indices can be in practice.
An example area that has received little attention concerns the data structure that materializes a tree index. In most works, trees appear in their naïve form and are processed trivially. That is, a tree is simply a collection of if-else blocks, and is evaluated from root to leaf, one node at a time. The vectors in the leaf of a tree, too, are simply searched exhaustively. Importantly, the knowledge that one tree is often insufficient and that a forest of trees is often necessary to reach an acceptable retrieval accuracy is not taken advantage of. This insight was key in improving forest traversal in the learning-to-rank literature [Lucchese et al., 2015, Ye et al., 2018], in particular when a batch of queries is to be processed simultaneously. It remains to be seen if a more efficient tree traversal algorithm can unlock the power of tree indices.
Perhaps more importantly, the algorithms we studied in this chapter give us an arsenal of theoretical tools that may be of independent interest. Concepts such as partitioning, spillage, and ϵ-nets that are so critical in the development of many of the algorithms we saw earlier are useful not only in the context of trees, but also in other classes of retrieval algorithms. We will say more on that in future chapters.
# References
N. Ailon and B. Chazelle. The fast Johnson-Lindenstrauss transform and approximate nearest neighbors. SIAM Journal on Computing, 39(1):302–322, 2009.
Y. Bachrach, Y. Finkelstein, R. Gilad-Bachrach, L. Katzir, N. Koenigstein, N. Nice, and U. Paquet. Speeding up the Xbox recommender system using a Euclidean transformation for inner-product spaces. In Proceedings of the 8th ACM Conference on Recommender Systems, pages 257–264, 2014.
J. L. Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9):509–517, 9 1975.
A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbor. In Proceedings of the 23rd International Conference on Machine Learning, pages 97–104, 2006.
P. Ciaccia, M. Patella, and P. Zezula. M-tree: An efficient access method for similarity search in metric spaces. In Proceedings of the 23rd International Conference on Very Large Data Bases, pages 426–435, 1997.
K. L. Clarkson. Nearest neighbor queries in metric spaces. In Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, pages 609–617, 1997.
R. R. Curtin. Improving dual-tree algorithms. PhD thesis, Georgia Institute of Technology, Atlanta, GA, USA, 2016.
S. Dasgupta and K. Sinha. Randomized partition trees for nearest neighbor search. Algorithmica, 72(1):237–263, 5 2015.
Y. Elkin and V. Kurlin. Counterexamples expose gaps in the proof of time complexity for cover trees introduced in 2006. In 2022 Topological Data Analysis and Visualization, pages 9–17, Los Alamitos, CA, 10 2022.
Y. Elkin and V. Kurlin. A new near-linear time algorithm for k-nearest neighbor search using a compressed cover tree. In Proceedings of the 40th International Conference on Machine Learning, 2023.
J. H. Friedman, J. L. Bentley, and R. A. Finkel. An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software, 3(3):209–226, 9 1977.
Y. Gu, Z. Napier, Y. Sun, and L. Wang. Parallel cover trees and their applications. In Proceedings of the 34th ACM Symposium on Parallelism in Algorithms and Architectures, pages 259–272, 2022.
M. Izbicki and C. Shelton. Faster cover trees. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1162–1170, Lille, France, 07–09 Jul 2015.
D. R. Karger and M. Ruhl. Finding nearest neighbors in growth-restricted metrics. In Proceedings of the 34th Annual ACM Symposium on Theory of Computing, pages 741–750, 2002.
R. Krauthgamer and J. R. Lee. Navigating nets: Simple algorithms for proximity search. In Proceedings of the 15th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 798–807, 2004.
T. Liu, A. W. Moore, A. Gray, and K. Yang. An investigation of practical approximate nearest neighbor algorithms. In Proceedings of the 17th International Conference on Neural Information Processing Systems, pages 825–832, 2004.
C. Lucchese, F. M. Nardini, S. Orlando, R. Perego, N. Tonellotto, and R. Venturini. Quickscorer: A fast algorithm to rank documents with additive ensembles of regression trees. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 73–82, 2015.
J. McNames. A fast nearest-neighbor algorithm based on a principal axis search tree. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(9):964–976, 2001.
R. Panigrahy. An improved algorithm finding nearest neighbor using kd-trees. In LATIN 2008: Theoretical Informatics, pages 387–398, 2008.
P. Ram and A. G. Gray. Maximum inner-product search using cone trees. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 931–939, 2012.
P. Ram and K. Sinha. Revisiting kd-tree for nearest neighbor search. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1378–1388, 2019.
P. Ram, D. Lee, and A. G. Gray. Nearest-neighbor search on a time budget via max-margin trees. In Proceedings of the 2012 SIAM International Conference on Data Mining, pages 1011–1022, 2012.
K. Sinha. LSH vs randomized partition trees: Which one to use for nearest neighbor search? In Proceedings of the 13th International Conference on Machine Learning and Applications, pages 41–46, 2014.
K. Sinha and O. Keivani. Sparse randomized partition trees for nearest neighbor search. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 681–689, 20–22 Apr 2017.
R. F. Sproull. Refinements to nearest-neighbor searching in k-dimensional trees. Algorithmica, 6(1):579–589, 6 1991.
T. Ye, H. Zhou, W. Y. Zou, B. Gao, and R. Zhang. Rapidscorer: Fast tree ensemble evaluation by maximizing compactness in data level parallelization. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 941–950, 2018.
P. N. Yianilos. Data structures and algorithms for nearest neighbor search in general metric spaces. In Proceedings of the 4th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 311–321, 1993.
Chapter 5 Locality Sensitive Hashing
Abstract In the preceding chapter, we delved into algorithms that inferred the geometrical shape of a collection of vectors and condensed it into a navigable structure. In many cases, the algorithms were designed for exact top-k retrieval, but could be modified to provide guarantees on approximate search. This section, instead, explores an entirely different idea that is probabilistic in nature and, as such, is designed specifically for approximate top-k retrieval from the ground up.
# 5.1 Intuition
Let us consider the intuition behind what is known as Locality Sensitive Hashing (LSH) [Indyk and Motwani, 1998] first. Define b separate "buckets." Now, suppose there exists a mapping h(·) from vectors in R^d to these buckets, such that every vector is placed into a single bucket: h : R^d → [b]. Crucially, assume that vectors that are closer to each other according to the distance function δ(·, ·) are more likely to be placed into the same bucket. In other words, the probability that two vectors collide increases as δ decreases.
Considering the setup above, indexing is simply a matter of applying h to all vectors in the collection X and making note of the resulting placements. Retrieval for a query q is also straightforward: Perform exact search over the data points that are in the bucket h(q). The reason this procedure works with high probability is because it is more likely for the mapping h to place q in a bucket that contains its nearest neighbors, so that an exact search over the h(q) bucket yields the correct top-k vectors with high likelihood. This is visualized in Figure 5.1(a).
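The single-table version of this idea fits in a few lines of Python. The sketch below leaves the locality-sensitive function h abstract (concrete families for specific distance functions appear in Section 5.3); the class and parameter names are ours and purely illustrative.

```python
from collections import defaultdict

class SingleTableLSH:
    """Minimal sketch of the single-table LSH idea; h maps a vector to a bucket id."""

    def __init__(self, h, dist):
        self.h, self.dist = h, dist
        self.buckets = defaultdict(list)

    def index(self, vectors):
        # Indexing: place every data point into the bucket h(x).
        for i, x in enumerate(vectors):
            self.buckets[self.h(x)].append((i, x))

    def query(self, q, k=1):
        # Retrieval: exact search restricted to the bucket h(q).
        candidates = self.buckets.get(self.h(q), [])
        return sorted(candidates, key=lambda item: self.dist(q, item[1]))[:k]
```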
It is easy to extend this setup to "multi-dimensional" buckets in the following sense. If the h_i's are independent functions that have the desired property above (i.e., increased chance of collision with smaller δ), we may define a bucket in [b]^ℓ as the vector mapping g(·) = [h_1(·), h_2(·), . . . , h_ℓ(·)].
Fig. 5.1: Illustration of Locality Sensitive Hashing. In (a), a function h : R^2 → {1, 2, 3, 4} maps vectors to four buckets. Ideally, when two points are closer to each other, they are more likely to be placed in the same bucket. But, as the dashed arrows show, some vectors end up in less-than-ideal buckets. When retrieving the top-k vectors for a query q, we search through the data vectors that are in the bucket h(q). Figure (b) depicts an extension of the framework where each bucket is the vector [h_1(·), h_2(·)] obtained from two independent mappings h_1 and h_2.
Figure 5.1(b) illustrates this extension for ℓ = 2. The indexing and search procedures work in much the same way. But now, there are presumably fewer data points in each bucket, and spurious collisions (i.e., vectors that were mapped to the same bucket but that are far from each other according to δ) are less likely to occur. In this way, we are likely to reduce the overall search time and increase the accuracy of the algorithm.
Extending the framework even further, we can repeat the process above L times by constructing independent mappings g_1(·) through g_L(·) from individual mappings h_{ij}(·) (1 ≤ i ≤ L and 1 ≤ j ≤ ℓ), all of which possess the property of interest. Because the mappings are independent, repeating the procedure many times increases the probability of obtaining a high retrieval accuracy.
That is the essence of the LSH approach to top-k retrieval. Its key ingredient is the family H of functions h_{ij}'s that have the stated property for a given distance function, δ. This is the detail that is studied in the remainder of this section. But before we proceed to define H for different distance functions, we will first give a more rigorous description of the algorithm.
# 5.2 Top-k Retrieval with LSH
Earlier, we described informally the class of mappings that are at the core of LSH, as hash functions that preserve the distance between points. That is, the likelihood that such a hash function places two points in the same bucket is a function of their distance. Let us formalize that notion first in the following definition, due to Indyk and Motwani [1998].
Definition 5.1 ((r, (1 + ϵ)r, p_1, p_2)-Sensitive Family) A family of hash functions H = {h : R^d → [b]} is called (r, (1 + ϵ)r, p_1, p_2)-sensitive for a distance function δ(·, ·), where ϵ > 0 and 0 < p_1, p_2 < 1, if for any two points u, v ∈ R^d:
• δ(u, v) ≤ r ⟹ P_H[h(u) = h(v)] ≥ p_1; and,
• δ(u, v) > (1 + ϵ)r ⟹ P_H[h(u) = h(v)] ≤ p_2.
It is clear that such a family is useful only when p_1 > p_2. We will see examples of H for different distance functions later in this section. For the time being, however, suppose such a family of functions exists for any δ of interest.
The indexing algorithm remains as described before. Fix parameters ℓ and L to be determined later in this section. Then define the vector function g(·) = [h_1(·), h_2(·), . . . , h_ℓ(·)] where h_i ∈ H. Now, construct L such functions g_1 through g_L, and process the data points in collection X by evaluating the g_i's and placing them in the corresponding multi-dimensional bucket.
In the end, we have effectively built L tables, each mapping buckets to a list of data points that fall into them. Note that each of the L tables holds a copy of the collection, but each table organizes the data points differently.
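A sketch of this indexing step in Python might look as follows; the base family H is supplied as a generator of hash functions (make_h), and all names are our own, chosen for illustration only.

```python
import random
from collections import defaultdict

def build_lsh_index(vectors, make_h, ell, L, seed=0):
    """Build L tables; table i hashes a vector with g_i = (h_1, ..., h_ell)."""
    rng = random.Random(seed)
    # Each g_i is a tuple of ell independently drawn members of the family H.
    gs = [[make_h(rng) for _ in range(ell)] for _ in range(L)]
    tables = [defaultdict(list) for _ in range(L)]
    for idx, x in enumerate(vectors):
        for g, table in zip(gs, tables):
            bucket = tuple(h(x) for h in g)   # the multi-dimensional bucket in [b]^ell
            table[bucket].append(idx)
    return gs, tables
```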
# 5.2.1 The Point Location in Equal Balls Problem
Our intuitive description of retrieval using LSH ignored a minor technicality that we must elaborate in this section. In particular, as is clear from Definition 5.1, a family H has a dependency on the distance r. That means any instance of the family provides guarantees only with respect to a specific r. Consequently, any index obtained from a family H, too, is only useful in the context of a fixed r.
It appears, then, that the LSH index is not in and of itself sufficient for solving the ϵ-approximate retrieval problem of Definition 1.2 directly. But, it is enough for solving an easier decision problem that is known as Point Location in Equal Balls (PLEB), defined as follows:
Definition 5.2 ((r, (1 + ϵ)r)-Point Location in Equal Balls) For a query point q and a collection X, if there is a point u ∈ X such that δ(q, u) ≤ r, return Yes and any point v such that δ(q, v) < (1 + ϵ)r. Return No if there are no such points.
The algorithm to solve the (r, (1 + ϵ)r)-PLEB problem for a query point q is fairly straightforward. It involves evaluating the g_i's on q and exhaustively searching the corresponding buckets in order. We may terminate early after visiting at most 4L data points. For every examined data point u, the algorithm returns Yes if δ(q, u) ≤ (1 + ϵ)r, and No otherwise.
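Continuing the sketch from the previous section, the query procedure probes the L buckets associated with q and gives up after examining 4L candidates. Again, the function and variable names are our own, and the snippet assumes the index built by build_lsh_index above.

```python
def pleb_query(q, gs, tables, vectors, dist, r, eps):
    """Sketch of (r, (1+eps)r)-PLEB: return a point within (1+eps)r of q, or None."""
    L = len(tables)
    visited = 0
    for g, table in zip(gs, tables):
        bucket = tuple(h(q) for h in g)
        for idx in table.get(bucket, []):
            if dist(q, vectors[idx]) <= (1 + eps) * r:
                return vectors[idx]      # "Yes": a witness point
            visited += 1
            if visited >= 4 * L:         # early termination after 4L data points
                return None              # "No" (fails only with constant probability)
    return None
```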
# 5.2.1.1 Proof of Correctness
Suppose there exists a point u* ∈ X such that δ(q, u*) ≤ r. The algorithm above is correct, in the sense that it returns a point u with δ(q, u) ≤ (1 + ϵ)r, if we choose ℓ and L such that the following two properties hold with constant probability:
• ∃ i ∈ [L] s.t. g_i(u*) = g_i(q); and,
• Σ_{i=1}^{L} |(X \ B(q, (1 + ϵ)r)) ∩ g_i^{−1}(g_i(q))| < 4L, where g_i^{−1}(g_i(q)) is the set of vectors in bucket g_i(q).
The first property ensures that, as we traverse the L buckets associated with the query point, we are likely to visit either the optimal point u*, or some other point whose distance to q is at most (1 + ϵ)r. The second property guarantees that, with constant probability, there are no more than 4L points in the candidate buckets that are more than (1 + ϵ)r away from q. As such, we are likely to find a solution before visiting 4L points.
Theorem 5.1 Let Ï = ln p1/ ln p2 and m = |X |. Set L = mÏ and â = log1/p2 m. The properties above hold with constant probability for a (r, (1 + ϵ)r, p1, p2)-sensitive LSH family.
Proof. Consider the first property. We have, from Definition 5.1, that, for any hi â H:
P [hi(u') = hi(q)| > pi.
That holds simply because uâ â B(q, r). That implies:
P [si(u") = ai(a)| > pi
As such:
P [3 i ⬠[L] s.t. gi(u*) = ai(a)| >1-(1- pi).
5.2 Top-k Retrieval with LSH
Substituting â and L with the expressions given in the theorem gives:
: we) Line 1 P[3ie [L] s.t. gi(u )=gila)| 2 1-(Q-7 7) w1- 7,
proving that the property of interest holds with constant probability. | 2401.09350#170 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
proving that the property of interest holds with constant probability.
Next, consider the second property. For any point v such that δ(q, v) > (1 + ϵ)r, Definition 5.1 tells us that:
P[h_i(v) = h_i(q)] ≤ p_2
⟹ P[g_i(v) = g_i(q)] ≤ p_2^ℓ = 1/m
⟹ E[|{v s.t. g_i(v) = g_i(q) ∧ δ(q, v) > (1 + ϵ)r}|] ≤ 1
⟹ E[Σ_{i=1}^{L} |{v s.t. g_i(v) = g_i(q) ∧ δ(q, v) > (1 + ϵ)r}|] ≤ L,
where the last expression follows by the linearity of expectation when applied to all L buckets. By Markov's inequality, the probability that there are more than 4L points for which δ(q, v) > (1 + ϵ)r but that map to the same bucket as q is at most 1/4. That completes the proof. □
# 5.2.1.2 Space and Time Complexity
The algorithm terminates after visiting at most 4L vectors in the candidate buckets. Given the configuration of Theorem 5.1, this means that the time complexity of the algorithm for query processing is O(dm^ρ), which is sublinear in m.
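To get a feel for these quantities, here is a quick back-of-the-envelope computation in Python; the values of p_1, p_2, and m are made up purely for illustration.

```python
import math

p1, p2, m = 0.8, 0.5, 1_000_000                    # illustrative values only
rho = math.log(p1) / math.log(p2)                  # ~0.32
ell = math.ceil(math.log(m) / math.log(1 / p2))    # ell = log_{1/p2} m, ~20
L = math.ceil(m ** rho)                            # ~86 tables
# Query time scales as O(d * m^rho); the index stores each point L times.
print(rho, ell, L)
```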
As for space complexity of the algorithm, note that the index stores each data point L times. That implies the space required to build an LSH index has complexity O(mL) = O(m^{1+ρ}), which grows super-linearly with m. This growth rate can easily become prohibitive [Gionis et al., 1999, Buhler, 2001], particularly because it is often necessary to increase L to reach a higher accuracy, as the proof of Theorem 5.1 shows. How do we reduce this overhead and still obtain sub-linear query time? That is a question that has led to a flurry of research in the past.
One direction to address that question is to modify the search algorithm so that it visits multiple buckets from each of the L tables, instead of examining just a single bucket per table. That is the idea first explored by Panigrahy [2006]. In that work, the search algorithm is the same as in the standard version presented above, but in addition to searching the buckets for query q, it also performs many search operations for perturbed copies of q. While theoretically interesting, their method proves difficult to use in practice. That is because the amount of noise needed to perturb a query depends on the distance of the nearest neighbor to q, a quantity that is unknown a priori.
Additionally, it is likely that a single bucket may be visited many times over as we invoke the search procedure on the copies of q.
Later, Lv et al. [2007] refined that theoretical result and presented a method that, instead of perturbing queries randomly and performing multiple hash computations and search invocations, utilizes a more efficient approach in deciding which buckets to probe within each table. In particular, their "multi-probe LSH" first finds the bucket associated with q, say g_i(q). It then additionally visits other "adjacent" buckets, where a bucket is adjacent if it is more likely to hold data points that are close to the vectors in g_i(q). The precise way their algorithm arrives at a set of adjacent buckets depends on the hash family itself. In their work, Lv et al. [2007] consider only a hash family for the Euclidean distance, and take advantage of the fact that adjacent buckets (which are in [b]^ℓ) differ in each coordinate by at most 1; this becomes clearer when we review the LSH family for Euclidean distance in Section 5.3.3. This scheme was shown empirically to reduce by an order of magnitude the total number of hash tables that is required to achieve an accuracy greater than 0.9 on high-dimensional datasets.
Another direction is to improve the guarantees of the LSH family itself. As Theorem 5.1 indicates, ρ = log p_1/ log p_2 plays a critical role in the efficiency and effectiveness of the search algorithm, as well as the space complexity of the data structure. It makes sense, then, that improving ρ leads to smaller space overhead. Many works have explored advanced LSH families to do just that [Andoni and Indyk, 2008, Andoni et al., 2014, 2015]. We review some of these methods in more detail later in this chapter.
# 5.2.2 Back to the Approximate Retrieval Problem
A solution to PLEB of Definition 5.2 is a solution to ϵ-approximate top-k retrieval only if r = δ(q, u*), where u* is the k-th minimizer of δ(q, ·). But we do not know the minimal distance in advance! That begs the question: How does solving the PLEB problem help us solve the ϵ-approximate retrieval problem?
Indyk and Motwani [1998] argue that an efficient solution to this decision version of the problem leads directly to an efficient solution to the original problem. In effect, they show that ϵ-approximate retrieval can be reduced to PLEB. Let us review one simple, albeit inefficient reduction.
Let δ_max = max_{u,v∈X} δ(u, v) and δ_min = min_{u,v∈X} δ(u, v). Denote by Δ the aspect ratio: Δ = δ_max/δ_min. Now, define a set of distances R = {(1 + ϵ)^0, (1 + ϵ)^1, . . . , Δ}, and construct |R| LSH indices, one for each r ∈ R.
Retrieving vectors for query q is a matter of performing binary search over R to find the minimal distance such that PLEB succeeds and returns a point u ∈ X. That point u is the solution to the ϵ-approximate retrieval problem!
It is easy to see that such a reduction adds to the time complexity by a factor of O(log log_{1+ϵ} Δ), and to the space complexity by a factor of O(log_{1+ϵ} Δ).
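A compact sketch of this reduction follows; it assumes each entry of indices exposes a query method that solves PLEB for the corresponding radius (for instance, a thin wrapper around the hypothetical pleb_query above) and returns a witness point or None.

```python
def approximate_nn(q, indices, radii):
    """Binary search over sorted radii; indices[i] is an LSH index built for radii[i]."""
    lo, hi, answer = 0, len(radii) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        witness = indices[mid].query(q)
        if witness is not None:
            answer = witness     # PLEB succeeded at this radius; try a smaller one
            hi = mid - 1
        else:
            lo = mid + 1         # PLEB failed; try a larger radius
    return answer
```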
# 5.3 LSH Families
We have studied how LSH solves the PLEB problem of Definition 5.2, analyzed its time and space complexity, and reviewed how a solution to PLEB leads to a solution to the ϵ-approximate top-k retrieval problem of Definition 1.2. Throughout that discussion, we took for granted the existence of an LSH family that satisfies Definition 5.1 for a distance function of interest. In this section, we review example families and unpack their construction to complete the picture.
# 5.3.1 Hamming Distance
We start with the simpler case of Hamming distance over the space of binary vectors. That is, we assume that X ⊂ {0, 1}^d and δ(u, v) = ∥u − v∥_1, measuring the number of coordinates in which the two vectors u and v differ. For this setup, a hash family that maps a vector to one of its coordinates at random (a technique that is also known as bit sampling) is an LSH family [Indyk and Motwani, 1998], as the claim below shows.
speech, or a mix of these data modalities. That happens regardless of whether
data is represented by hand-crafted features or learnt embeddings. Collect a
large enough quantity of such vectors and the question of retrieval becomes
urgently relevant: Finding vectors that are more similar to a query vector.
This monograph is concerned with the question above and covers fundamental
concepts along with advanced data structures and algorithms for vector
retrieval. In doing so, it recaps this fascinating topic and lowers barriers of
entry into this rich area of research. | http://arxiv.org/pdf/2401.09350 | Sebastian Bruch | cs.DS, cs.IR | null | null | cs.DS | 20240117 | 20240117 | [] |
2401.09350 | 177 | Theorem 5.2 For X â {0, 1}d equipped with the Hamming distance, the family H = {hi | hi(u) = ui, 1 ⤠i ⤠d} is (r, (1 + ϵ)r, 1 â r/d, 1 â (1 + ϵ)r/d)- sensitive.
Proof. The proof is trivial. For a given r and two vectors u, v ∈ {0, 1}^d, if ‖u − v‖₁ ≤ r, then P[h_i(u) ≠ h_i(v)] ≤ r/d, so that P[h_i(u) = h_i(v)] ≥ 1 − r/d, and therefore p₁ = 1 − r/d. p₂ is derived similarly. □
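A minimal sketch of bit sampling in Python, assuming binary vectors stored as NumPy arrays; the empirical collision rate should track the theoretical 1 − r/d. This is an illustration, not the book's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bit_hash(d):
    """Draw h_i uniformly from the family H = {h_i(u) = u_i}."""
    i = rng.integers(d)
    return lambda u: u[i]

u = rng.integers(0, 2, size=64)
v = u.copy()
v[rng.choice(64, size=8, replace=False)] ^= 1   # Hamming distance exactly 8

hashes = [sample_bit_hash(64) for _ in range(10_000)]
collisions = np.mean([h(u) == h(v) for h in hashes])
print(collisions, 1 - 8 / 64)   # empirical vs. theoretical collision probability
```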
# 5.3.2 Angular Distance
Consider next the angular distance between two real vectors u, v â Rd, de- fined as:
δ(u, v) = arccos( ⟨u, v⟩ / (‖u‖₂ ‖v‖₂) ),   (5.1)
(a) Hyperplane LSH (b) Cross-polytope LSH
2401.09350 | 178 | Fig. 5.2: Illustration of hyperplane and cross-polytope LSH functions for an- gular distance in R2. In hyperplane LSH, we draw random directions (a and b) to define hyperplanes (A and B), and record +1 or â1 depending on which side of the hyperplane a vector (u and v) lies. For example, ha(u) = â1, ha(v) = +1, and hb(u) = hb(v) = â1. It is easy to see that the probability of a hash collision for two vectors u and v correlates with the angle between them. A cross-polytope LSH function, on the other hand, randomly rotates and normalizes (using matrix A or B) the vector (u), and records the clos- est standard basis vector as its hash. Note that, the cross-polytope is the L1 ball, which in R2 is a rotated square. As an example, hA(u) = âe1 and hB(u) = +e1.
2401.09350 | 179 | # 5.3.2.1 Hyperplane LSH
For this distance function, one simple LSH family is the set of hash functions that project a vector onto a randomly chosen direction and record the sign of the projection. Put differently, a hash function in this family is characterized by a random hyperplane, which is in turn defined by a unit vector sampled uniformly at random. When applied to an input vector u, the function returns a binary value (from {â1, 1}) indicating on which side of the hyperplane u is located. This procedure, which is known as sign random projections or hy- perplane LSH [Charikar, 2002], is illustrated in Figure 5.2(a) and formalized in the following claim.
Theorem 5.3 For X ⊂ R^d equipped with the angular distance of Equation (5.1), the family H = {h_r | h_r(u) = sign(⟨r, u⟩), r ∼ S^{d−1}} is (θ, (1 + ϵ)θ, 1 − θ/π, 1 − (1 + ϵ)θ/π)-sensitive for θ ∈ [0, π], with S^{d−1} denoting the d-dimensional hypersphere.
Proof. If the angle between two vectors is θ, then the probability that a randomly chosen hyperplane lies between them is θ/π. As such, the probability that they lie on the same side of the hyperplane is 1 − θ/π. The claim follows. □
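The following short Python sketch (an illustration under assumed toy data, not the monograph's code) stacks several sign-random-projection functions and compares the per-bit collision rate against 1 − θ/π.

```python
import numpy as np

rng = np.random.default_rng(0)

def hyperplane_hash(d, m):
    """m independent sign-random-projection functions, stacked: u -> {-1, +1}^m."""
    R = rng.normal(size=(m, d))   # each row defines a random hyperplane (scale is irrelevant to the sign)
    return lambda u: np.sign(R @ u)

h = hyperplane_hash(d=128, m=4096)
u, v = rng.normal(size=128), rng.normal(size=128)
theta = np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
agree = np.mean(h(u) == h(v))
print(agree, 1 - theta / np.pi)   # empirical per-bit collision rate vs. 1 - theta/pi
```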
# 5.3.2.2 Cross-polytope LSH
There are a number of other hash families for the angular distance in addition to the basic construction above. Spherical LSH [Andoni et al., 2014] is one example, albeit a purely theoretical oneâa single hash computation from that family alone is considerably more expensive than an exhaustive search over a million data points [Andoni et al., 2015]!
What is known as Cross-polytope LSH [Andoni et al., 2015, Terasawa and Tanaka, 2007] offers similar guarantees as the Spherical LSH but is a more practical construction. A function from this family randomly rotates an input vector first, then outputs the closest signed standard basis vector (e_i's for 1 ≤ i ≤ d) as the hash value. This is illustrated for R² in Figure 5.2(b), and stated formally in the following result.
2401.09350 | 181 | Theorem 5.4 For X â Sdâ1 equipped with the angular distance of Equa- tion (5.1) or equivalently the Euclidean distance, the following family consti- tutes an LSH:
H = { h_R | h_R(u) = arg min_{e ∈ {±e_i}_{i=1}^{d}} ‖ e − Ru/‖Ru‖₂ ‖, R ∈ R^{d×d}, R_ij ∼ N(0, 1) },
where N (0, 1) is the standard Gaussian distribution. The probability of colli- sion for unit vectors u, v â Sdâ1 with â¥u â v⥠< Ï is:
ln( 1 / P[h_R(u) = h_R(v)] ) = (τ² / (4 − τ²)) · ln d + O_τ(ln ln d).
Importantly:
ρ = log p₁ / log p₂ = (1 / (1 + ϵ)²) · (4 − (1 + ϵ)²r²) / (4 − r²) + o(1).
Proof. We wish to show that, for two unit vectors u, v â Sdâ1 with â¥u â v⥠< Ï , the expression above for the probability of a hash collision is correct. That, indeed, completes the proof of the theorem itself. To show that, we will take advantage of the spherical symmetry of Gaussian random variablesâwe used this property in the proof of Theorem 2.2.
By the spherical symmetry of Gaussians, without loss of generality, we can assume that u = e1, the first standard basis, and v = αe1 + βe2, where α2 + β2 = 1 (so that v has unit norm) and (α â 1)2 + β2 = Ï 2 (because the distance between u and v is Ï ).
Let us now model the collision probability as follows:
Fig. 5.3: Illustration of the set SX1,Y1 = {|x| ⤠X1 ⧠|αx + βy| ⤠αX1 + βY1} in (a). Figure (b) visualizes the derivation of Equation (5.5).
P[h(u) = h(v)] = 2d · P_{X,Y∼N(0,I_d)}[ h(u) = h(v) = e₁ ]
= 2d · P_{X,Y∼N(0,I_d)}[ ∀i, |X_i| ≤ X₁ ∧ |αX_i + βY_i| ≤ αX₁ + βY₁ ]
= 2d · E_{X₁,Y₁∼N(0,1)}[ P[ |X₂| ≤ X₁ ∧ |αX₂ + βY₂| ≤ αX₁ + βY₁ ]^{d−1} ].   (5.2)
The first equality is due again to the spherical symmetry of the hash functions and the fact that there are 2d signed standard basis vectors. The second equality simply uses the expressions for u = e₁ and v = αe₁ + βe₂. The final equality follows because of the independence of the coordinates of X and Y, which are sampled from a d-dimensional isotropic Gaussian distribution.
2401.09350 | 184 | The innermost term in Equation (5.2) is the Gaussian measure of the closed, convex set {|x| ⤠X1 ⧠|αx + βy| ⤠αX1 + βY1}, which is a bounded plane in R2. This set, which we denote by SX1,Y1, is illustrated in Fig- ure 5.3(a). Then we can expand Equation (5.2) as follows:
2d · E_{X₁,Y₁∼N(0,1)}[ P[S_{X₁,Y₁}]^{d−1} ]   (5.3)
= 2d ∫₀¹ P_{X₁,Y₁∼N(0,1)}[ P[S_{X₁,Y₁}]^{d−1} ≥ t ] dt.   (5.4)
We therefore need to expand P[SX1,Y1] in order to complete the expression above. The rest of the proof derives that quantity.
Step 1. Consider P[SX1,Y1 ] = G(SX1,Y1), which is the standard Gaus- sian measure of the set SX1,Y1. In effect, we are interested in G(S) for some bounded convex subset S â R2. We need the following lemma to derive an
2401.09350 | 185 | 5.3 LSH Families
expression for G(S). But first define µA(r) as the Lebesgue measure of the intersection of a circle of radius r (Sr) with the set A, normalized by the circumference of Sr, so that 0 ⤠µA(r) ⤠1 is a probability measure:
µ_A(r) ≜ µ(A ∩ S_r) / (2πr),
and denote by âA the distance from the origin to A (i.e., âA â inf{r > 0 | µA(r) > 0}).
# Lemma 5.1 For the closed set A â R2 with µA(r) non-decreasing:
sup_{r>0} ( µ_A(r) · e^{−r²/2} ) ≤ G(A) ≤ e^{−∆_A²/2}.
Proof. The upper-bound can be derived as follows:
G(A) = ∫₀^∞ r µ_A(r) e^{−r²/2} dr ≤ ∫_{∆_A}^∞ r e^{−r²/2} dr = e^{−∆_A²/2}.
For the lower-bound:
G(A) = ∫₀^∞ r µ_A(r) e^{−r²/2} dr ≥ µ_A(r′) ∫_{r′}^∞ r e^{−r²/2} dr = µ_A(r′) e^{−(r′)²/2},
2401.09350 | 186 | for all rⲠ> 0. The inequality holds because µA(·) is non-decreasing.
Now, K̄ ≜ S_{X₁,Y₁} is a convex set, so for its complement, K ⊂ R², µ_K(·) is non-decreasing. Using the above lemma, that fact implies the following for small ϵ:
Ω(√ϵ · e^{−(1+ϵ)²∆_K²/2}) ≤ G(K) ≤ e^{−∆_K²/2}.

The lower bound uses the fact that µ_K((1 + ϵ)∆_K) = Ω(√ϵ), because:

µ(K ∩ S_{(1+ϵ)∆_K}) = (1 + ϵ)∆_K · arccos(1/(1 + ϵ)) = Ω((1 + ϵ)∆_K √ϵ).   (5.5)
See Figure 5.3(b) for a helpful illustration.
Since we are interested in the measure of K â = SX1,Y1, we can apply the
result above directly to obtain:
1 − e^{−∆(u,v)²/2} ≤ P[S_{X₁,Y₁}] ≤ 1 − Ω(√ϵ · e^{−(1+ϵ)²∆(u,v)²/2}),   (5.6)
where we use the notation ∆_K = ∆(u, v) = min{u, αu + βv}.
Step 2. For simplicity, first consider the side of Equation (5.6) that does not depend on ϵ, and substitute that into Equation (5.3). We obtain:
2d ∫₀¹ P_{X₁,Y₁∼N(0,1)}[ P[S_{X₁,Y₁}] ≥ t^{1/(d−1)} ] dt
= 2d ∫₀¹ P_{X₁,Y₁∼N(0,1)}[ e^{−∆(X₁,Y₁)²/2} ≤ 1 − t^{1/(d−1)} ] dt
= 2d ∫₀¹ P_{X₁,Y₁∼N(0,1)}[ ∆(X₁, Y₁) ≥ √(2 log(1/(1 − t^{1/(d−1)}))) ] dt.   (5.7)
Step 3. We are left with bounding P[∆(X₁, Y₁) ≥ θ]. The event ∆(X₁, Y₁) ≥ θ is, by definition, the intersection of two half-planes: X₁ ≥ θ and αX₁ + βY₁ ≥ θ. If we denote this set by K, then we are again interested in the Gaussian measure of K. For small ϵ, we can apply the lemma above to show that:
Ω(ϵ · e^{−(1+ϵ)²∆_K²}) ≤ G(K) ≤ e^{−∆_K²/2},   (5.8)
where the constant factor in ⦠depends on the angle between the two half- planes. That is because µ(K â© S(1+ϵ)âK ) is ϵ times that angle.
It is easy to see that ∆_K² = (4/(4 − τ²)) · θ², so that we arrive at the following for small ϵ and every θ ≥ 0:
Ω_τ(ϵ · e^{−(1+ϵ)² · 4θ²/(4−τ²)}) ≤ P_{X₁,Y₁∼N(0,1)}[ ∆(X₁, Y₁) ≥ θ ] ≤ e^{−2θ²/(4−τ²)}.   (5.9)
Step 4. Substituting Equation (5.9) into Equation (5.7) yields:
2d ∫₀¹ P_{X₁,Y₁∼N(0,1)}[ ∆(X₁, Y₁) ≥ √(2 log(1/(1 − t^{1/(d−1)}))) ] dt
= 2d ∫₀¹ (1 − t^{1/(d−1)})^{4/(4−τ²)} dt
= 2d(d−1) ∫₀¹ (1 − s)^{4/(4−τ²)} s^{d−2} ds
= 2d(d−1) · B((8 − τ²)/(4 − τ²), d − 1) = 2d · Θ_τ(1) · d^{−4/(4−τ²)},
where B denotes the Beta function and the last step uses the Stirling approximation.
The result above can be expressed as follows:
ln( 1 / P[h(u) = h(v)] ) = (τ² / (4 − τ²)) · ln d ± O_τ(1).
Step 5. Repeating Steps 2 through 4 with the expressions that involve ϵ completes the proof. □
Finally, Andoni et al. [2015] show that, instead of applying a random rota- tion using Gaussian random variables, it is sufficient to use a pseudo-random rotation based on Fast Hadamard Transform. In effect, they replace the ran- dom Gaussian matrix R in the construction above with three consecutive applications of HD, where H is the Hadamard matrix and D is a random diagonal sign matrix (where the entries on the diagonal take values from {±1}).
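Here is a small Python sketch of a cross-polytope hash along those lines; the choice of three H·D applications mirrors the construction above, but the dimensions, seeds, and use of scipy's Hadamard matrix are assumptions for illustration only.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)

def crosspolytope_hash(d):
    """One cross-polytope hash: pseudo-random rotation (three H*D applications),
    then the index and sign of the largest coordinate. d must be a power of two."""
    H = hadamard(d) / np.sqrt(d)
    D = [np.sign(rng.normal(size=d)) for _ in range(3)]   # random diagonal sign matrices
    def h(u):
        x = u / np.linalg.norm(u)
        for Di in D:
            x = H @ (Di * x)
        i = int(np.argmax(np.abs(x)))
        return (i, int(np.sign(x[i])))   # identifies the closest signed standard basis vector
    return h

h = crosspolytope_hash(64)
u = rng.normal(size=64)
v = u + 0.1 * rng.normal(size=64)   # a nearby point, likely to collide
print(h(u), h(v))
```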
# 5.3.3 Euclidean Distance
Datar et al. [2004] proposed the first LSH family for the Euclidean distance, δ(u, v) = ‖u − v‖₂. Their construction relies on the notion of p-stable distributions, which we define first.
Definition 5.3 (p-stable Distribution) A distribution D_p is said to be p-stable if Σᵢ aᵢZᵢ, where aᵢ ∈ R and Zᵢ ∼ D_p, has the same distribution as ‖a‖_p Z, where a = [a₁, a₂, . . . , a_d] and Z ∼ D_p. As an example, the Gaussian distribution is 2-stable.
Let us state this property slightly differently so it is easier to understand its connection to LSH. Suppose we have an arbitrary vector u ∈ R^d. If we construct a d-dimensional random vector α whose coordinates are independently sampled from a p-stable distribution D_p, then the inner product ⟨α, u⟩ is distributed according to ‖u‖_p Z where Z ∼ D_p. By linearity of inner product, we can also see that ⟨α, u⟩ − ⟨α, v⟩, for two vectors u, v ∈ R^d, is distributed as ‖u − v‖_p Z. This particular fact plays an important role in the proof of the following result.
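A quick numerical check of the 2-stable property, using Gaussian vectors; the vector u, the sample size, and the seed are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=32)
alpha = rng.normal(size=(100_000, 32))   # rows: independent Gaussian (2-stable) vectors
proj = alpha @ u
print(proj.std(), np.linalg.norm(u))     # <alpha, u> behaves like ||u||_2 * N(0, 1)
```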
2401.09350 | 191 | Theorem 5.5 For X â Rd equipped with the Euclidean distance, a 2-stable distribution D2, and the uniform distribution U over the interval [0, r], the following family is (r, (1 + ϵ)r, p(r), p((1 + ϵ)r))-sensitive:
H = { h_{α,β} | h_{α,β}(u) = ⌊ (⟨α, u⟩ + β) / r ⌋, α ∈ R^d, αᵢ ∼ D₂, β ∼ U[0, r] },
where:
p(x) = ∫₀^r (1/x) f(t/x) (1 − t/r) dt,
and f is the probability density function of the absolute value of D2.
Proof. The key to proving the claim is modeling the probability of a hash collision for two arbitrary vectors u and v: P[h_{α,β}(u) = h_{α,β}(v)]. That event can be expressed as follows:
P[h_{α,β}(u) = h_{α,β}(v)] = P[ ⌊(⟨α, u⟩ + β)/r⌋ = ⌊(⟨α, v⟩ + β)/r⌋ ]
= P[ |⟨α, u − v⟩| < r (Event A) ∧ ⟨α, u⟩ + β and ⟨α, v⟩ + β do not straddle a multiple of r (Event B) ].
Using the 2-stability of α, Event A is equivalent to â¥u â vâ¥2|Z| < r, where Z is drawn from D2. The probability of the complement of Event B is simply the ratio between â¨Î±, u â vâ© and r. Putting all that together, we obtain that:
P[h_{α,β}(u) = h_{α,β}(v)] = ∫_{z=0}^{r/‖u−v‖₂} f(z) (1 − z‖u−v‖₂/r) dz = ∫_{t=0}^{r} (1/‖u−v‖₂) f(t/‖u−v‖₂) (1 − t/r) dt,
where we derived the last equality by the variable change t = zâ¥u â vâ¥2. Therefore, if â¥u â v⥠⤠x:
P[h_{α,β}(u) = h_{α,β}(v)] ≥ ∫_{t=0}^{r} (1/x) f(t/x) (1 − t/r) dt = p(x).
It is easy to complete the proof from here.
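A minimal sketch of this family in Python; the dimensionality, bucket width r, and the number of sampled hash functions are illustrative assumptions, not values prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def euclidean_hash(d, r):
    """One hash from H: u -> floor((<alpha, u> + beta) / r), alpha Gaussian, beta ~ U[0, r]."""
    alpha = rng.normal(size=d)
    beta = rng.uniform(0, r)
    return lambda u: int(np.floor((alpha @ u + beta) / r))

hs = [euclidean_hash(d=16, r=4.0) for _ in range(5_000)]
u = rng.normal(size=16)
v = u + 0.5 * rng.normal(size=16)
print(np.mean([h(u) == h(v) for h in hs]))   # empirical collision probability p(||u - v||)
```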
# 5.3.4 Inner Product
Many of the arguments that establish the existence of an LSH family for a distance function of interest rely on the triangle inequality. Inner product as a measure of similarity, however, does not enjoy that property. As such, developing an LSH family for inner product requires that we somehow transform the problem from MIPS to NN search or MCS search, as was the case in Chapter 4.
2401.09350 | 193 | Finding the right transformation that results in improved hash qualityâas determined by Ïâis the question that has been explored by several works in the past [Neyshabur and Srebro, 2015, Shrivastava and Li, 2015, 2014, Yan et al., 2018].
Let us present a simple example. Note that, we may safely assume that queries are unit vectors (i.e., q â Sdâ1), because the norm of the query does not change the outcome of MIPS.
Now, define the transformation φ_d : R^d → R^{d+1}, first considered by Bachrach et al. [2014], as follows: φ_d(u) = [u; √(1 − ‖u‖₂²)]. Apply this transformation to data points in X. Clearly, ‖φ_d(u)‖₂ = 1 for all u ∈ X. Separately, pad the query points with a single 0: φ_q(v) = [v; 0] ∈ R^{d+1}.
We can immediately verify that â¨q, uâ© = â¨Ïq(q), Ïd(u)â© for a query q and data point u. But by applying the transformations Ïd(·) and Ïq(·), we have reduced the problem to MCS! As such, we may use any of existing LSH families that we have seen for angular distance in Section 5.3.2 for MIPS.
There has been much debate over the suitability of the standard LSH framework for inner product, with some works extending the framework to what is known as asymmetric LSH [Shrivastava and Li, 2014, 2015]. It turns out, however, that none of that is necessary. In fact, as Neyshabur and Srebro [2015] argued formally and demonstrated empirically, the simple scheme we described above sufficiently addresses MIPS.
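The sketch below illustrates the transformation in Python and checks that the MIPS winner is preserved. The rescaling of the data into the unit ball (so that the square root is real) and the toy data are assumptions made for illustration.

```python
import numpy as np

def phi_data(X):
    """Scale data into the unit ball, then append sqrt(1 - ||u||^2) to each point."""
    X = X / np.max(np.linalg.norm(X, axis=1))
    extra = np.sqrt(np.maximum(0.0, 1 - np.sum(X**2, axis=1, keepdims=True)))
    return np.hstack([X, extra])

def phi_query(q):
    """Normalize the query and pad it with a single zero."""
    q = q / np.linalg.norm(q)
    return np.append(q, 0.0)

rng = np.random.default_rng(0)
X, q = rng.normal(size=(1000, 32)), rng.normal(size=32)
Xt, qt = phi_data(X), phi_query(q)
print(np.argmax(X @ q), np.argmax(Xt @ qt))   # the MIPS argmax is preserved under the map
```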
# 5.4 Closing Remarks
Much like branch-and-bound algorithms, an LSH approach to top-k retrieval rests on a solid theoretical foundation. There is a direct link between all that is developed theoretically and the accuracy of an LSH-based top-k retrieval system.
2401.09350 | 195 | Like tree indices, too, the LSH literature is arguably mature. There is therefore not a great deal of open questions left to investigate in its founda- tion, with many recent works instead exploring learnt hash functions or its applications in other domains.
What remains open and exciting in the context of top-k retrieval, however, is the possibility of extending the theory of LSH to explain the success of other retrieval algorithms. We will return to this discussion in Chapter 7.
# References
A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Communications of the ACM, 51(1): 117â122, 1 2008.
A. Andoni, P. Indyk, H. L. Nguyen, and I. Razenshteyn. Beyond locality- sensitive hashing. In Proceedings of the 2014 Annual ACM-SIAM Sympo- sium on Discrete Algorithms, pages 1018â1028, 2014.
A. Andoni, P. Indyk, T. Laarhoven, I. Razenshteyn, and L. Schmidt. Practical and optimal lsh for angular distance. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, pages 1225–1233, 2015.
2401.09350 | 196 | Y. Bachrach, Y. Finkelstein, R. Gilad-Bachrach, L. Katzir, N. Koenigstein, N. Nice, and U. Paquet. Speeding up the xbox recommender system using a euclidean transformation for inner-product spaces. In Proceedings of the 8th ACM Conference on Recommender Systems, page 257â264, 2014.
J. Buhler. Efficient large-scale sequence comparison by locality-sensitive hashing. Bioinformatics, 17(5):419â428, 05 2001.
M. S. Charikar. Similarity estimation techniques from rounding algorithms. In Proceedings of the Thiry-Fourth Annual ACM Symposium on Theory of Computing, pages 380â388, 2002.
M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the 20th Annual Symposium on Computational Geometry, pages 253–262, 2004.
A. Gionis, P. Indyk, and R. Motwani. Similarity search in high dimensions via hashing. In Proceedings of the 25th International Conference on Very Large Data Bases, pages 518–529, 1999.
2401.09350 | 197 | P. Indyk and R. Motwani. Approximate nearest neighbors: Towards remov- ing the curse of dimensionality. In Proceedings of the 30th Annual ACM Symposium on Theory of Computing, pages 604â613, 1998.
Q. Lv, W. Josephson, Z. Wang, M. Charikar, and K. Li. Multi-probe lsh: Efficient indexing for high-dimensional similarity search. In Proceedings of the 33rd International Conference on Very Large Data Bases, pages 950â 961, 2007.
B. Neyshabur and N. Srebro. On symmetric and asymmetric lshs for inner product search. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, pages 1926â 1934, 2015.
R. Panigrahy. Entropy based nearest neighbor search in high dimensions. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithm, pages 1186â1195, 2006.
A. Shrivastava and P. Li. Asymmetric lsh (alsh) for sublinear time maximum inner product search (mips). In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, pages 2321–2329, 2014.
A. Shrivastava and P. Li. Improved asymmetric locality sensitive hashing (alsh) for maximum inner product search (mips). In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, pages 812–821, 2015.
K. Terasawa and Y. Tanaka. Spherical lsh for approximate nearest neigh- bor search on unit hypersphere. In Proceedings of the 10th International Conference on Algorithms and Data Structures, pages 27â38, 2007.
X. Yan, J. Li, X. Dai, H. Chen, and J. Cheng. Norm-ranging lsh for maximum inner product search. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 2956â2965, 2018.
# Chapter 6 Graph Algorithms
Abstract We have seen two major classes of algorithms that approach the top-k retrieval problem in their own unique ways. One recursively partitions a vector collection to model its geometry, and the other hashes the vectors into predefined buckets to reduce the search space. Our next class of algorithms takes yet a different view of the question. At a high level, our third approach is to âwalkâ through a collection, hopping from one vector to another, where every hop gets us spatially closer to the optimal solution. This chapter reviews algorithms that use a graph data structure to implement that idea.
2401.09350 | 199 | # 6.1 Intuition
The most natural way to understand a spatial walk through a collection of vectors is by casting it as traversing a (directed) connected graph. As we will see, whether the graph is directed or not depends on the specific algorithm itself. But the graph must regardless be connected, so that there always exists at least one path between every pair of nodes. This ensures that we can walk through the graph no matter where we begin our traversal.
Let us write G(V, E) to refer to such a graph, whose set of vertices or nodes is denoted by V, and its set of edges by E. So, for u, v ∈ V in a directed graph, if (u, v) ∈ E, we may freely move from node u to node v. Hopping from v to u is not possible if (v, u) ∉ E. Because we often need to talk about the set of nodes that can be reached by a single hop from a node u, known as the neighbors of u, we give it a special symbol and define that set as follows: N(u) = {v | (u, v) ∈ E}.
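As a small, assumed illustration (not the book's code), a directed graph and its neighborhood function N(u) can be held in a plain adjacency map:

```python
# a minimal adjacency-list view of G(V, E); N(u) is a dictionary lookup
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
N = {}
for u, v in edges:
    N.setdefault(u, set()).add(v)   # directed: only u -> v is traversable
print(N[2])   # neighbors reachable in one hop from node 2
```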
2401.09350 | 200 | The idea behind the algorithms in this chapter is to construct a graph in the pre-processing phase and use that as an index of a vector collection for top-k retrieval. To do that, we must decide what is a node in the graph (i.e., define the set V), how nodes are linked to each other (E), and, importantly, what the search algorithm looks like.
Fig. 6.1: Illustration of the greedy traversal algorithm for finding the top-1 solution on an example (undirected) graph. The procedure enters the graph from an arbitrary âentryâ node. It then compares the distance of the node to query q with the distance of its neighbors to q, and either terminates if no neighbor is closer to q than the node itself, or advances to the closest neighbor. It repeats this procedure until the terminal condition is met. The research question in this chapter concerns the construction of the edge set: How do we construct a sparse graph which can be traversed greedily while providing guarantees on the (near-)optimality of the solution
The set of nodes V is easy to construct: Simply designate every vector in the collection X as a unique node in G, so that |X| = |V|. There should, therefore, be no ambiguity if we referred to a node as a vector. We use both terms interchangeably.
2401.09350 | 201 | What properties should the edge set E have? To get a sense of what is required of the edge set, it would help to consider the search algorithm first. Suppose we are searching for the top-1 vector closest to query q, and assume that we are, at the moment, at an arbitrary node u in G.
From node u, we can have a look around and assess if any of our neighbors in N (u) is closer to q. By doing so, we find ourselves in one of two situations. Either we encounter no such neighbor, so that u has the smallest distance to q among its neighbors. If that happens, ideally, we want u to also have the smallest distance to q among all vectors. In other words, in an ideal graph, a local optimum coincides with the global optimum.
Alternatively, we may find one such neighbor v ∈ N(u) for which δ(q, v) < δ(q, u) and v = arg min_{w∈N(u)} δ(q, w). In that case, the ideal graph is one where the following event takes place: If we moved from u to v, and repeated the process above in the context of N(v) and so on, we will ultimately arrive at a local optimum (which, by the previous condition, is the global optimum). Terminating the algorithm then would therefore give us the optimal solution to the top-1 retrieval problem.
2401.09350 | 202 | Put differently, in an ideal graph, if moving from a node to any of its neighbors does not get us spatially closer to q, it is because the current node is the optimal solution to the top-1 retrieval problem for q.
Algorithm 3: Greedy search algorithm for top-k retrieval over a graph index.
Input: Graph G = (V, E) over collection X with distance δ(·, ·); query point q; entry node s ∈ V; retrieval depth k.
Result: Exact top-k solution for q.
1: Q ← {s}  ▷ Q is a priority queue
2: while Q changed in the previous iteration do
3: …
8: …
9: end while
10: return Q
On a graph with that property, the procedure of starting from any node in the graph, hopping to a neighbor that is closer to q, and repeating this procedure until no such neighbor exists, gives the optimal solution. That procedure is the familiar best-first-search algorithm, which we illustrate on a toy graph in Figure 6.1. That will be our base search algorithm for top-1 retrieval.
2401.09350 | 203 | Extending the search algorithm to top-k requires a minor modification to the procedure above. It begins by initializing a priority queue of size k. When we visit a new node, we add it to the queue if its distance with q is smaller than the minimum distance among the nodes already in the queue. We keep moving from a node in the queue to its neighbors until the queue stabilizes (i.e., no unseen neighbor of any of the nodes in the queue has a smaller distance to q). This is described in Algorithm 3.
Note that, assuming δ(·, ·) is proper, it is easy to see that the top-1 optimal- ity guarantee immediately implies top-k optimalityâyou should verify this claim as an exercise. It therefore suffices to state our requirements in terms of top-1 optimality alone. So, ideally, E should guarantee that traversing G in a best-first-search manner yields the optimal top-1 solution.
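The Python sketch below is written in the spirit of Algorithm 3 but is not the book's exact pseudocode: it keeps a bounded candidate set of the k closest nodes seen so far and expands neighbors until no unseen neighbor improves the set. The ring graph, the distance function, and the eviction policy are illustrative assumptions.

```python
import heapq
import numpy as np

def greedy_search(graph, vectors, q, entry, k=1):
    dist = lambda u: float(np.linalg.norm(vectors[u] - q))
    top = {entry: dist(entry)}          # current candidate set (at most k nodes)
    visited = {entry}
    frontier = [(top[entry], entry)]    # min-heap ordered by distance to q
    while frontier:
        _, u = heapq.heappop(frontier)
        for v in graph[u]:
            if v in visited:
                continue
            visited.add(v)
            d = dist(v)
            if len(top) < k or d < max(top.values()):
                top[v] = d
                heapq.heappush(frontier, (d, v))
                if len(top) > k:
                    top.pop(max(top, key=top.get))   # evict the current worst candidate
    return sorted(top, key=top.get)

# toy usage: a ring graph over ten random 2-d points
rng = np.random.default_rng(0)
pts = rng.normal(size=(10, 2))
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
print(greedy_search(ring, pts, q=np.zeros(2), entry=0, k=3))
```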
# 6.1.1 The Research Question
It is trivial to construct an edge set that provides the desired optimality guarantee: Simply add an edge between every pair of nodes, completing the graph! The greedy search algorithm described above will take us to the opti- mal solution.
However, such a graph not only has high space complexity, but it also has a linear query time complexity. That is because, the very first step (which
also happens to be the last step) involves comparing the distance of q to the entry node, with the distance of q to every other node in the graph! We are better off exhaustively scanning the entire collection in a flat index.
The research question that prompted the algorithms we are about to study in this chapter is whether there exists a relatively sparse graph that has the optimality guarantee we seek or that can instead provide guarantees for the more relaxed, ϵ-approximate top-k retrieval problem.
As we will learn shortly, with a few notable exceptions, all constructions of E proposed thus far in the literature for high-dimensional vectors amount to heuristics that attempt to approximate a theoretical graph but come with no guarantees. In fact, in almost all cases, their worst-case complexity is no better than exhaustive search. Despite that, many of these heuristics work remarkably well in practice on real datasets, making graph-based methods one of the most widely adopted solutions to the approximate top-k retrieval problem.
2401.09350 | 205 | In the remainder of this chapter, we will see classes of theoretical graphs that were developed in adjacent scientific disciplines, but that are seemingly suitable for the (approximate) top-k retrieval problem. As we introduce these graphs, we also examine representative algorithms that aim to build an ap- proximation of such graphs in high dimensions, and review their properties. We note, however, that the literature on graph-based methods is vast and growing still. There is a plethora of studies that experiment with (minor or major) adjustments to the basic idea described earlier, or that empirically compare and contrast different algorithmic flavors on real-world datasets. This chapter does not claim to, nor does it intend to cover the explosion of material on graph-based algorithms. Instead, it limits its scope to the founda- tional principles and ground-breaking works that are theoretically somewhat interesting. We refer the reader to existing reports and surveys for the full spectrum of works on this topic [Wang et al., 2021, Li et al., 2020].
2401.09350 | 206 | # 6.2 The Delaunay Graph
One classical graph that satisfies the conditions we seek and guarantees the optimality of the solution obtained by best-first-search traversal is the De- launay graph [Delaunay, 1934, Fortune, 1997]. It is easier to understand the construction of the Delaunay graph if we consider instead its dual: the Voronoi diagram. So we begin with a description of the Voronoi diagram and Voronoi regions.
(a) Voronoi diagram (b) Delaunay graph
2401.09350 | 207 | Fig. 6.2: Visualization of the Voronoi diagram (a) and its dual, the Delaunay graph (b) for an example collection X of points in R2. A Voronoi region associated with a point u (shown here as the area contained within the dashed lines) is a set of points whose nearest neighbor in X is u. The Delaunay graph is an undirected graph whose nodes are points in X and two nodes are connected (shown as solid lines) if their Voronoi regions have a non-empty intersection.
# 6.2.1 Voronoi Diagram
For the moment, suppose δ is the Euclidean distance and that we are in R². Suppose further that we have a collection X of just two points u and v on the plane. Consider now the subset of R² comprising all the points to which u is the closest point from X. Similarly, we can identify the subset to which v is the closest point. These two subsets are, in fact, partitions of the plane and are separated by a line; the points on this line are equidistant to u and v. In other words, two points in R² induce a partitioning of the plane where each partition is "owned" by a point and describes the set of points that are closer to it than they are to the other point.
We can trivially generalize that notion to more than two points and, indeed, to higher dimensions. A collection X of points in R^d partitions the space into unique regions R = ∪_{u∈X} R_u, where the region R_u is owned by point u ∈ X and represents the set of points to which u is the closest point in X. Formally, R_u = {x | u = arg min_{v∈X} δ(x, v)}. Note that each region is a convex polytope that is the intersection of half-spaces. The set of regions is known as the Voronoi diagram for the collection X and is illustrated in Figure 6.2(a) for an example collection in R².
# 6.2.2 Delaunay Graph
The Delaunay graph for X is, in effect, a graph representation of its Voronoi diagram. The nodes of the graph are trivially the points in X, as before. We place an edge between two nodes u and v if their Voronoi regions have a non-empty intersection: R_u ∩ R_v ≠ ∅. Clearly, by construction, this graph is undirected. An example of this graph is rendered in Figure 6.2(b).
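For small, low-dimensional collections the Delaunay graph can be built directly; the sketch below does so with scipy's Delaunay triangulation, reading edges off the simplices. The point set and dimensionality are assumptions for illustration; this is not how the construction would be approached at the scale or dimensionality this monograph is concerned with.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.normal(size=(50, 2))
tri = Delaunay(points)

# Two nodes are Delaunay neighbors if they appear together in some simplex
# (equivalently, their Voronoi regions touch).
edges = set()
for simplex in tri.simplices:
    for i in range(len(simplex)):
        for j in range(i + 1, len(simplex)):
            edges.add(tuple(sorted((int(simplex[i]), int(simplex[j])))))
print(len(edges))
```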
2401.09350 | 209 | There is an important technical detail that is worth noting. The Delaunay graph for a collection X is unique if the points in X are in general posi- tion [Fortune, 1997]. A collection of points are said to be in general position if the following two conditions are satisfied. First, no n points from X â Rd, for 2 ⤠n ⤠d + 1, must lie on a (n â 2)-flat. Second, no n + 1 points must lie on any (nâ2)-dimensional hypersphere. In R2, as an example, for a collection of points to be in general position, no three points may be co-linear, and no four points co-circular.
We must add that the detail above is generally satisfied in practice. Importantly, if the vectors in our collection are independent and identically distributed, then the collection is almost surely in general position. That is why we often take this technicality for granted. So from now on, we assume that the Delaunay graph of a collection of points is unique.
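As a small, self-contained illustration of the first condition in R2 (a sketch under my own naming, not code from the text), three points violate general position exactly when the signed area of the triangle they span vanishes:

```python
def are_collinear(a, b, c, tol=1e-12):
    """Check whether three points in R^2 are collinear.

    The quantity below is twice the signed area of the triangle (a, b, c),
    i.e., the cross product of the edge vectors b - a and c - a; it is
    zero iff the points lie on a common line. The tolerance guards
    against floating-point noise.
    """
    twice_signed_area = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return abs(twice_signed_area) <= tol

print(are_collinear((0, 0), (1, 1), (2, 2)))  # True: not in general position
print(are_collinear((0, 0), (1, 1), (2, 0)))  # False
```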
# 6.2.3 Top-1 Retrieval
We can immediately recognize the importance of Voronoi regions: They geometrically capture the set of queries for which a point from the collection is the solution to the top-1 retrieval problem. But what is the significance of the dual representation of this geometrical concept? How does the Delaunay graph help us solve the top-1 retrieval problem?
For one, the Delaunay graph is a compact representation of the Voronoi diagram. Instead of describing polytopes, we need only record edges between neighboring nodes. But, more crucially, as the following claim shows, we can traverse the Delaunay graph greedily and reach the optimal top-1 solution from any starting node. In other words, the Delaunay graph has the desired property we described in Section 6.1.
Theorem 6.1 Let G = (V, E) be a graph that contains the Delaunay graph of m vectors X ⊂ Rd. The best-first-search algorithm over G gives the optimal solution to the top-1 retrieval problem for any arbitrary query q if δ(·, ·) is proper.
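To make the statement of Theorem 6.1 concrete, here is a minimal sketch of the greedy best-first traversal it refers to, written against the adjacency map produced by the earlier `delaunay_graph_2d` sketch; the names and the choice of entry point are illustrative assumptions.

```python
import numpy as np

def best_first_search(query, points, adjacency, start=0):
    """Greedy best-first search for the node closest to `query`.

    At each step we move to the neighbor that is closest to the query
    and stop as soon as no neighbor improves on the current node. When
    `adjacency` contains the Delaunay graph and the distance is proper,
    this local optimum is the exact nearest neighbor (Theorem 6.1).
    """
    dist = lambda i: np.linalg.norm(points[i] - query)
    current = start
    while True:
        best_neighbor = min(adjacency[current], key=dist, default=current)
        if dist(best_neighbor) >= dist(current):
            return current   # no neighbor is closer: the walk has converged
        current = best_neighbor
```

Pairing the two sketches, `best_first_search(q, X, delaunay_graph_2d(X))` returns the index of the exact nearest neighbor of q regardless of which node the walk starts from.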
The proof of the result above relies on an important property of the Delaunay graph, which we state first.
Fig. 6.3: Illustration of the second case in the proof of Lemma 6.1.
Lemma 6.1 Let G = (V, E) be the Delaunay graph of a collection of points X ⊂ Rd, and let B be a ball centered at µ that contains two points u, v ∈ X, with radius r = min(δ(µ, u), δ(µ, v)), for a continuous and proper distance function δ(·, ·). Then either (u, v) ∈ E or there exists a third point in X that is contained in B.
Proof. Suppose there is no other point in X that is contained in B. We must show that, in that case, (u, v) ∈ E.
There are two cases. The first and easy case is when u and v are on the surface of B. Clearly, u and v are equidistant from µ. Because there are no other points in B, we can conclude that µ lies in the intersection of Ru and Rv, the Voronoi regions associated with u and v. That implies (u, v) ∈ E.