diff --git a/-NAyT4oBgHgl3EQfRPZI/content/tmp_files/2301.00061v1.pdf.txt b/-NAyT4oBgHgl3EQfRPZI/content/tmp_files/2301.00061v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..701cbc10883d15a1dadcb5867a16bf514ebc5086 --- /dev/null +++ b/-NAyT4oBgHgl3EQfRPZI/content/tmp_files/2301.00061v1.pdf.txt @@ -0,0 +1,3077 @@ +A Global Optimization Algorithm for K-Center +Clustering of One Billion Samples +Jiayang Ren1, Ningning You2, Kaixun Hua1, Chaojie Ji3, Yankai Cao1 +1Department of Chemical and Biological Engineering, University of British Columbia, Vancouver, BC, Canada, +rjy12307@mail.ubc.ca, kaixun.hua@ubc.ca, yankai.cao@ubc.ca +2Antai College of Economics and Management, Shanghai Jiao Tong University, Shanghai, China, +ningyou@sjtu.edu.cn +3Department of Mathematics, University of British Columbia, Vancouver, BC, Canada, +chaojiej@math.ubc.ca +This paper presents a practical global optimization algorithm for the K-center clustering problem, which +aims to select K samples as the cluster centers to minimize the maximum within-cluster distance. This +algorithm is based on a reduced-space branch and bound scheme and guarantees convergence to the global +optimum in a finite number of steps by only branching on the regions of centers. To improve efficiency, +we have designed a two-stage decomposable lower bound, the solution of which can be derived in a closed +form. In addition, we also propose several acceleration techniques to narrow down the region of centers, +including bounds tightening, sample reduction, and parallelization. Extensive studies on synthetic and real- +world datasets have demonstrated that our algorithm can solve the K-center problems to global optimal +within 4 hours for ten million samples in the serial mode and one billion samples in the parallel +mode. Moreover, compared with the state-of-the-art heuristic methods, the global optimum obtained by our +algorithm can averagely reduce the objective function by 25.8% on all the synthetic and real-world datasets. +Key words : global optimization; K-center clustering; branch and bound; two-stage decomposition; bounds +tightening +1. Introduction +Cluster analysis is a task to group similar samples into the same cluster while separating less +similar samples into different clusters. It is a fundamental unsupervised machine learning task that +explores the character of datasets without the need to annotate cluster classes. Clustering plays +a vital role in various fields, such as data summarization (Kleindessner et al. 2019, Hesabi et al. +1 +arXiv:2301.00061v1 [math.OC] 30 Dec 2022 + +Ren et al.: Global Optimization for K-Center of One Billion Samples +2 +2015), customer grouping (Aggarwal et al. 2004), facility location determination (Hansen et al. +2009), and etc. +There are several typical cluster models, including connectivity-based models, centroid-based +models, distribution-based models, density-based models, etc. This work focuses on one of the fun- +damental centroid-based clustering models called the K-center problem. The goal of the K-center +problem is to minimize the maximum within-cluster distance +(Kaufman and Rousseeuw 2009). +Specifically, given a dataset with S samples and the desired number of clusters K, the K-center +problem aims to select K samples from the dataset as centers and to minimize the maximum +distance from other samples to its closest center. 
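For concreteness, this max-min objective can be evaluated for any fixed candidate center set by a single pass over the data. The following minimal sketch is written in Julia, the language of our implementation (Section 6.1); the function and variable names are illustrative only and are not part of the algorithm described later.

    # Largest squared Euclidean distance from any sample to its closest center.
    # X is an S×A matrix of samples; centers is a K×A matrix whose rows are the chosen samples.
    function kcenter_objective(X::AbstractMatrix, centers::AbstractMatrix)
        worst = 0.0
        for s in 1:size(X, 1)
            # distance from sample s to its nearest center
            d = minimum(sum(abs2, X[s, :] .- centers[k, :]) for k in 1:size(centers, 1))
            worst = max(worst, d)
        end
        return worst
    end

Minimizing this quantity over all possible choices of K samples as centers is the combinatorial problem studied in this paper.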
The K-center problem is a combinatorial opti- +mization problem that has been widely studied in theoretical computer science (Lim et al. 2005). +Moreover, it has been intensively explored as a symmetric and uncapacitated case of the p-center +facility location problem in operations research and management science (Garcia-Diaz et al. 2019), +where the number of facilities corresponds to the variable k in a standard K-center problem. +Formally, provided a K, the objective function of K-center problem can be formulated as follows: +min +µ∈X max +s∈S min +k∈K ||xs − µk||2 +2 +(1) +where X = {x1,...,xS} is the dataset with S samples and A attributes, in which xs = [xs,1,...,xs,A] ∈ +RA is the sth sample and xs,a is the ath attribute of ith sample, s ∈ S := {1,··· ,S} is the index set +of samples. As to the variables related to clusters, k ∈ K := {1,··· ,K} is the index set of clusters, +µ := {µ1,··· ,µK} represents the center set of clusters, µk = [µk +1,...,µk +A] ∈ RA is the center of kth +cluster. Here, µ are the variables to be determined in this problem. We use µ ∈ X to denote the +“centers on samples” constraint in which each cluster’s center is restricted to the existing samples. +1.1. Literature Review +The K-center problem has been shown to be NP-hard (Gonzalez 1985), which means that it is +unlikely to find an optimal solution in polynomial time unless P = NP (Garey and Johnson 1979). +As a remedy, heuristic algorithms, which aim to find a good but not necessarily optimal solution, + +Ren et al.: Global Optimization for K-Center of One Billion Samples +3 +are often used to solve the K-center problem on large-scale datasets. The study of exact algorithms, +which provide an optimal solution but may hardly be terminated in an acceptable time, is restricted +to small-scale datasets due to this poor scalability on larger datasets. +Regarding heuristic algorithms, there are several 2-approximation algorithms that provide a +theoretical guarantee of their distance from the optimal solution for the K-center problem, but do +not provide a guarantee on their running time (Plesn´ık 1987, Gonzalez 1985, Dyer and Frieze 1985, +Hochbaum and Shmoys 1985, Cook et al. 1995). Among these 2-approximation algorithms, Furthest +Point First (FPF) algorithm proposed by Gonzalez (1985) is known to be the fastest in practice +(Miheliˇc and Robic 2005). It works by starting with a randomly selected center and then adding +points that are farthest from the existing centers to the center set. Despite their solution quality +guarantee, these 2-approximation algorithms may not always provide close-to-optimal solutions in +practice (Garcia-Diaz et al. 2019). Another kind of heuristic methods with a polynomial running +time but a weaker solution quality guarantee is also intensively studied in the literature (Miheliˇc +and Robic 2005, Garcia-Diaz et al. 2017). Besides heuristic methods, there are also metaheuristic +methods that do not have a polynomial running time or a solution quality guarantee, but have +been shown to provide near-optimal solutions in some cases (Mladenovi´c et al. 2003, Pullan 2008, +Davidovi´c et al. 2011). In sum, none of these algorithms can deterministically guarantee a global +optimal solution for the K-center problem. +In contrast to the numerous heuristic algorithms, the study of exact algorithms, which provide +the optimal solution but no solution time guarantee, is still struggling with small-scale problems +(e.g., thousands of samples). 
Early exact works are inspired by the relationship between K-center +and set-covering problems (Minieka 1970). Daskin (2000) transferred the K-center problem to a +maximal covering problem, in which the number of covered samples by K centers is maximized. +Then, they proposed an iterative binary search scheme to accelerate the solving procedure. Ilhan +and Pinar (2001) considered iteratively setting a maximum distance and validating if it can cover +all the samples. Elloumi et al. (2004) designed a new integer linear programming formulation of + +Ren et al.: Global Optimization for K-Center of One Billion Samples +4 +the K-center problem, then solved this new formulation by leveraging the binary search scheme +and linear programming relaxation. These algorithms have been shown to provide practical results +on small-scale datasets with up to 1,817 samples. +Another research direction models the K-center problem as a Mixed Integer Programming (MIP) +formulation, allowing for the use of the branch and bound technique to find an optimal solution. +However, the vanilla implementations of the branch and bound technique are confined to small-scale +datasets with fewer than 250 samples (Brusco and Stahl 2005). Hence, constraint programming is +introduced to address the larger scale K-center problems. Dao et al. (2013) designed two sets of +variables describing the cluster centers and sample belongings, then updated the solution through +constraint propagation and branching. They further reduced the sets of variables and proposed a +more general framework in Duong et al. (2017). By involving constraint programming, their works +can solve the datasets with up to 5,000 samples. +Recently, researchers have explored iterative techniques to solve the K-center problem on large +datasets by breaking it down into smaller subproblems, such as iterative sampling (Aloise and +Contardo 2018) and row generation (Contardo et al. 2019). In Aloise and Contardo (2018), a +sampling-based algorithm was proposed that alternates between an exact procedure on a small +subset of the data and a heuristic procedure to test the optimality of the current solution. This +algorithm is capable to solve a dataset containing 581,012 samples within 4 hours. However, a report +about the optimality gap is absent, which is an important measure of solution quality. According to +that computing the covering set for a subset of all samples is cheaper than all (Chen and Handler +1987, Chen and Chen 2009), the same research group proposed a row generation algorithm that +relies on computing a much smaller sub-matrix (Contardo et al. 2019). This approach is able to +solve a dataset with 1 million samples to a 6% gap in 9 hours. However, neither of these methods +provides a finite-step convergence guarantee, which results in that they may not always converge +to an arbitrarily small gap within a finite number of steps. Therefore, these methods can lead to a +nontrivial optimality gap, especially for large datasets. + +Ren et al.: Global Optimization for K-Center of One Billion Samples +5 +1.2. Main contributions +Recently, Cao and Zavala (2019) proposed a reduced-space spatial branch and bound (BB) scheme +for two-stage stochastic nonlinear programs. Hua et al. (2021) adopted this reduced-space BB +scheme and Lagrangian decomposition to solve the K-means clustering problem with a global +optimal guarantee. They solve the large-scale K-means problems up to 210,000 samples to 2.6% +optimality gap within 4 hours. 
However, these works can not be directly applied to the K-center +problem. The challenge is that the K-center problem minimizes the maximum within-cluster dis- +tance instead of the average within-cluster distance. Therefore, utilizing the Lagrangian decom- +position method to compute the lower bound is impossible. Moreover, because of the “centers on +samples” constraint in the K-center problem, the direct application of Hua’s algorithm will lead +to infeasible solutions. +To address these challenges, we propose a tailored reduced-space branch and bound algorithm +for the K-center problem. We also design several bounds tightening (BT) and sample reduction +methods to accelerate the BB procedure. Our algorithm is unique in that it only branches on the +region of centers, which allows us to guarantee convergence to the global optimum within a finite +number of steps. In contrast, traditional branch and bound algorithms must branch on all integer +variables, which can become computationally infeasible for large-scale problems. By focusing on +the limited region of centers, our algorithm is capable to solve even large-scale K-center problems. +Specifically, the main contributions of this paper are as follows: +• We propose an exact global optimization algorithm based on a tailored reduced-space branch +and bound scheme for the K-center problem. To increase efficiency, we develop a two-stage decom- +posable lower bounding method with a closed-form solution, eliminating the need for using any +MIP solver in the optimization process. Moreover, the convergence of our algorithm to the global +optimum is guaranteed by branching only on the region of centers. +• We demonstrate that the assignment of clusters can be determined for many samples without +knowing the optimal solution. Based on this characteristic, we propose several bounds tightening + +Ren et al.: Global Optimization for K-Center of One Billion Samples +6 +and sample reduction techniques to further reduce the search space and accelerate the solving +procedure. Moreover, we also implement a sample-level parallelization strategy to fully utilize +computational resources. +• An open-source Julia implementation of the algorithm is provided. Extensive studies on 5 +synthetic and 33 real-world datasets have demonstrated that we can obtain the global solution for +datasets with up to 1 billion samples and 12 features, a feat that has not been achieved so far. +Especially, compared with the heuristic methods, the global optimum obtained by our algorithm +can averagely reduce the objective function by 25.8% on all the synthetic and real-world datasets. +This paper is an expanded version of our proceeding publication (Shi et al. 2022) that includes one +new acceleration technique called sample reduction and a parallel implementation. These improve- +ments have significantly increased the scale of the optimally solvable K-center problem from 14 +million samples to 1 billion. In this version, we provide more detailed proof of the global opti- +mum convergence of our algorithm. In addition, we have designed more comprehensive numerical +experiments on a broader range of datasets and parameters. +1.3. Outline +This paper is organized as follows: Section 2 introduces a two-stage formulation and a Mixed Integer +Nonlinear Programming (MINLP) formulation for the K-center problem. Section 3 presents the +details of the reduced-space branch and bound algorithm, including the lower bound, upper bound +methods, and convergence analysis. 
Section 4 discusses the accelerating techniques for our BB +algorithm, including bounds tightening, sample reduction, and parallel implementation techniques. +Section 5 presents the detailed proof of convergence to the global optimum in the finite steps. +Section 6 gives extensive numerical results compared with other algorithms. Finally, Section 7 +concludes the paper. + +Ren et al.: Global Optimization for K-Center of One Billion Samples +7 +2. K-center Formulation +2.1. Two-stage Formulation +To introduce the lower bounding method in the branch and bound scheme, we first propose a +two-stage optimization form of the K-center Problem 1. The first-stage problem is as follows: +z = +min +µ∈X∩M0 max +s∈S Qs(µ). +(2) +where the center set µ is the so-called first-stage variable, Qs(µ) is the optimal value of the second- +stage optimization problem: +Qs(µ) = min +k∈K ||xs − µk||2 +2 +(3) +We denote a closed set M0 := {µ | µ ≤ µ ≤ ¯µ} as the region of centers, where µ is the lower bound of +centers and ¯µ is the upper bound, i.e., µk +a = min +s∈S Xs,a, ¯µk +a = max +s∈S Xs,a, ∀k ∈ K, a ∈ {1,··· ,A}. Here, +the constraint µ ∈ M0 is introduced to simplify the discussion of the BB scheme. Since M0 can be +inferred directly from data, it will not affect the optimal solution of Problem 1. Constraint µ ∈ +X ∩ M0 means the center of each cluster is selected from the samples belonging to the intersection +set of the corresponding region M0 and the dataset X +2.2. MINLP Formulation +To introduce the bounds tightening and sample reduction methods, we propose a MINLP formu- +lation of the K-center Problem 1: +min +µ,d,b,λ d∗ +(4a) +s.t. dk +s ≥ ||xs − µk||2 +2 +(4b) +− N1(1 − bk +s) ≤ d∗ +s − dk +s ≤ 0 +(4c) +d∗ ≥ d∗ +s +(4d) +� +k∈K +bk +s = 1 +(4e) +bk +s ∈ {0,1} +(4f) + +Ren et al.: Global Optimization for K-Center of One Billion Samples +8 +− N2(1 − λk +s) ≤ xs − µk ≤ N2(1 − λk +s) +(4g) +� +s∈S +λk +s = 1 +(4h) +λk +s ∈ {0,1} +(4i) +bk +s ≥ λk +s +(4j) +s ∈ S,k ∈ K +(4k) +where dk +s represents the distance between sample xs and center µk, d∗ +s denotes the distance between +xs and the center of its cluster, N1 and N2 are both arbitrary large values. bk +s and λk +s are two binary +variables. bk +s is equal to 1 if sample xs belongs to the Kth cluster, and 0 otherwise. λk +s is equal to +1 if xs is the center of the Kth cluster µk, and 0 otherwise. +Constraint 4c is a big M formulation and ensures that d∗ +s = dk +s if bk +s = 1 and d∗ +s ≤ dk +s otherwise. +Constraint 4e guarantees that sample xs belongs to one cluster. We also adopt Constraint 4g, 4h +and 4j to represent the “centers on samples” constraints, µ ∈ X. Specifically, Constraint 4g uses a +big M formula to make sure that µk = xs if λk +s = 1 and Constraint 4h confirms that each center can +only be selected on one sample. Constraint 4j ensures that if xs is the center of the Kth cluster, +then it is assigned to the Kth cluster. It should be noted that the global optimizer CPLEX also +relies on this formulation to solve the K-center problem. +3. Tailored Reduced-space Branch and Bound Scheme +This section introduces a tailored reduced-space branch and bound algorithm for the K-center +problem with lower and upper bounding methods. +3.1. Lower Bounds +In this section, we adopt the two-stage formulation and derive a closed-form solution to obtain the +lower bound of the K-center Problem 1. 
+At each node in the BB procedure, we deal with a subset of M0, which is denoted as M, and +solve the following problem concerning M: +z(M) = min +µ∈X∩M max +s∈S Qs(µ) +(5) + +Ren et al.: Global Optimization for K-Center of One Billion Samples +9 +This problem can be equivalently reformulated as the following problem by duplicating µ across +samples and enforcing them to be equal: +min +µs∈X∩M max +s∈S Qs(µs) +(6a) +s.t. +µs = µs+1,s ∈ {1,··· ,S − 1} +(6b) +We call constraints 6b the non-anticipativity constraints. By removing the “centers on samples” +constraint µ ∈ X and the non-anticipativity constraints 6b, we attain a lower bound formulation +as follow: +β(M) := min +µs∈M max +s∈S Qs(µs). +(7) +With constraints relaxed, the feasible region of Problem 7 is a superset of Problem 6’s feasible +region. Therefore, it is obvious that β(M) ≤ z(M). +In Problem 7, since µ of each sample is independent, it is obvious that: +β(M) = max +s∈S min +µs∈M Qs(µs). +(8) +Clearly, problem 8 can be decomposed into S subproblems with β(M) = max +s∈S βs(M): +βs(M) = min +µ∈M Qs(µ). +(9) +Denote the region of kth cluster’s center as M k := {µk : µk ≤ µk ≤ ¯µk} where µk and ¯µk are the +lower and upper bound of µk respectively. Since Qs(µ) = min +k∈K ||xs − µk||2 +2, we have +βs(M) = min +k∈K min +µk∈Mk ||xs − µk||2 +2, +(10) +which can be further decomposed into K subsubproblems with βs(M)=min +k∈K βk +s (M k): +βk +s (M k) = min +µk∈Mk ||xs − µk||2 +2. +(11) +The analytical solution to Problem 11 is: µk +a +∗ = mid{µk +a, xs,a, ¯µk +a},∀a ∈ {1,··· ,A}. Consequently, +the closed-form solution to Problem 7 can be easily computed by the max-min operation on all the +samples. + +Ren et al.: Global Optimization for K-Center of One Billion Samples +10 +3.2. Upper Bounds +At each node in the BB procedure, the upper bounds of Problem 5 can be obtained by fixing the +centers at a candidate feasible solution ˆµ ∈ X ∩ M. In this way, we can compute the upper bound +base on the following equation: +α(M) = max +s∈S min +k∈K ||xs − ˆµk||2 +2 +(12) +Since ˆµ is a feasible solution, we have z(M) ≤ α(M), ∀ˆµ ∈ X ∩ M. In our implementation, we +use two methods to obtain the candidate feasible solutions. At the root node, we use a heuristic +method called Farthest First Traversal (Gonzalez 1985) to obtain a candidate solution ˆµ ∈ X ∩M0. +Using this method, we randomly pick an initial point and select each following point as far as +possible from the previously selected points. Algorithm 2 describes the details of the farthest first +traversal, where d(xs,T) represents the minimum distance from sample xs to any sample in set T. +We use FFT(M0) to denote the upper bound obtained using this approach. At a child node with +center region M, for each cluster, we select the data sample closest to the middle point of M k as +ˆµk, and obtain the corresponding upper bound α(M). +3.3. Branching +Our algorithm only needs to branch on the region of centers, M := {µ : µ ≤ µ ≤ ¯µ}, to guarantee +convergence, which would be theoretically discussed in Section 5, o. Since the desired number of +clusters is K and the number of attributes is A, the number of possible branching variables is K ×A. +The selection of branching variables and values will dramatically influence the BB procedure’s +efficiency. In our implementation, we select the max-range variable at each node as the branching +variable and the midpoint of this variable as the branching value. +3.4. 
Branch and Bound Scheme +The detailed reduced-space branch and bound algorithm for the K-center Problem 1 are given in +the Algorithm 1. In the algorithm, We use relint(.) to denote the relative interior of a set. We can + +Ren et al.: Global Optimization for K-Center of One Billion Samples +11 +also establish the convergence of the branch-and-bound scheme in Algorithm 1. The BB procedure +can generate a monotonically non-ascending sequence {αi} and a monotonically non-descending +sequence {βi}. We can show that they both converge to z in a finite number of steps. +Theorem 1. Algorithm 1 is convergent to the global optimal solution after a finite step L, with +βL = z = αL, by only branching on the region of centers. +Since the following acceleration techniques also influence the global convergence in Section 4. We +present the detailed proof of Theorem 1 in Section 5 after introducing the acceleration techniques. +Algorithm 1 Branch and Bound Scheme +Initialization +Initialize the iteration index i ← 0; +Set M ← {M0}, and tolerance ϵ > 0; +Compute initial lower and upper bounds βi = β(M0), αi = +FFT(M0) // Alg. 2 ; +Select K farthest initial seeds // Sec.4.1.1; +while M ̸= ∅ do +Node Selection +Select a set M satisfying β(M) = βi from M and delete it +from M; +Update i ← i + 1; +Bounds Tightening +Cluster Assignment // Alg. 3; +Bounds Tightening // Alg. 4; +Obtain the tightened node ˆ +M; +If i % isr = 0, Sample Reduction // Alg. 5; +if ∃|X ∩ M k| > 1,k ∈ K then +Branching +Find two subsets M1 +and M2 +s.t. relint(M1) ∩ +relint(M2) = ∅ and M1 ∪ M2 = M; +Update M ← M∪{Mi}, if X ∩M k +i ̸= ∅,∀k ∈ K,i ∈ 1,2; +end if +Bounding +Compute upper and lower bound α(M1), β(M1), α(M2), +β(M2); +Let βi ← min{β(M ′) | M ′ ∈ M}; +Let αi ← min{αi−1,α(M1),α(M2)}; +Remove all M ′ from M if β(M ′) ≥ αi; +If βi − αi ≤ ϵ, STOP; +end while +Algorithm 2 Farthest First Traversal +Initialization +Randomly pick s ∈ S; +Denote T as the set of K points selected by farthest first +traversal; +Set T ← {xs}; +while |T| < K do +Compute xs ∈ arg max +xs∈X d(xs,T) to find xs which is the +farthest away from set T; +T ← T ∪ {xs}; +end while +Algorithm 3 Cluster Assignment +Center Based Assignment +for sample xs ∈ X do +if bk +s == 0,∀k ∈ K then +if βk +s (M k) > α,∀k ∈ K \ {k′} then +xs is assigned to cluster k′ with bk′ +s = 1; +end if +end if +end for +Sample Based Assignment +if All clusters have at least one sample assigned then +for sample xs ∈ X do +if ∀k ∈ K \ {k′}, ∃ xj assigned to kth cluster, ||xs − +xj||2 +2 > 4α then +xs is assigned to cluster k′ with bk′ +s = 1. +end if +end for +end if +Algorithm 4 Bounds Tightening +Given the current center region M and upper bound α +for Cluster k ∈ K do +Obtain the assigned sample set J k using Alg.3; +Compute the ball-based or box-boxed area of each +assigned sample, Bα(xj) or Rα(xj); +Tighten the center region by M k ∩Bα(xj) or M k ∩Rα(xj) +, ∀j ∈ J k; +Further tighten according to the “centers on samples” +constraint; +end for +Algorithm 5 Sample Reduction +Initialize the index set of redundant samples as R ← S +for all BB nodes do +Obtain the index set of redundant samples for lower +bounds, RLB, according to the criterion in Sec. 4.2.1; +Obtain the index set of redundant samples for upper +bounds, RUB, according to the criterion in Sec. 4.2.2; +Update the redundant index set, R ← R ∩ RLB ∩ RUB; +end for +Delete samples in the redundant set R from the current +dataset. + +Ren et al.: Global Optimization for K-Center of One Billion Samples +12 +4. 
Acceleration Techniques +Although the lower bound introduced in Section 3.1 is enough to guarantee convergence, it might +not be very tight, leading to tremendous iterations. Therefore, we propose several acceleration +techniques to reduce the search space and speed up the BB procedure. Since Algorithm 1 only +branches on the region of centers M := {µ : µ ≤ µ ≤ ¯µ}, we focus on reducing the region of centers +to accelerate the solution process while not excluding the optimal solution of the original K-center +problem. +4.1. Bounds Tightening Techniques +In each node, the assignment of many samples (i.e., which cluster the sample is assigned to) can be +pre-determined by the geometrical relationship of samples and regions of centers. This information +can be further used to reduce the region of µ. +4.1.1. Cluster Assignment +The task of cluster assignment is to pre-determine some values +of bk +s in the MINLP Formulation 4 at each BB node before finding the global optimal solution. +We first demonstrate the relations between samples and centers. Denote α as the upper bound +obtained using methods described in Section 3.2. Then based on Objective 4a and Constraint 4d, +we have d∗ +s ≤ d∗ ≤ α. From Constraint 4b and 4c, we can conclude that if bk +s = 1, then ||xs −µk||2 +2 ≤ +d∗ +s ≤ α. Therefore, we can derive Lemma 1: +Lemma 1. If sample xs is in the kth cluster, then ||xs − µk||2 +2 ≤ α, where α is an upper bound of +the K-center problem. +Besides the relation between samples and centers, cluster assignments may also be determined +from the distance of two samples. Suppose sample xi and xj belong to the kth cluster, then from +Lemma 1 we have ||xi − µk||2 +2 ≤ α and ||xj − µk||2 +2 ≤ α. Thus ||xi − xj||2 +2 = ||xi − µk + µk − xj||2 +2 ≤ +(||xi − µk||2 + ||µk − xj||2)2 ≤ 4α. Therefore, we have Lemma 2: +Lemma 2. If two samples xi and xj are in the same cluster, then ||xi − xj||2 +2 ≤ 4α where α is an +upper bound of the K-center problem. + +Ren et al.: Global Optimization for K-Center of One Billion Samples +13 +We propose three methods for pre-assigning samples based on these two Lemmas: +K Farthest Initial Seeds: From Lemma 2, if ||xi − xj||2 +2 > 4α, then xi and xj are not in the +same cluster. At the root node, if we can find K samples with the distance between any two of +these samples xi and xj satisfying ||xi − xj||2 +2 > 4α, then we can conclude that these K samples +must belong to K distinct clusters. Figure 1 shows an example of this property, in which three +samples are pre-assigned to 3 distinct clusters. We call these K points initial seeds. To find the +initial seeds, every two samples must be as far as possible. Therefore, in our implementation, we use +the heuristic Farthest First Traversal (FFT) (Algorithm 2) to obtain K farthest points. For about +half of the case studies shown in Section 6, we can obtain the initial seeds using FFT. However, for +other cases, initial seeds can not be obtained using FFT, or the initial seeds may not even exist. +Center-Based Assignment: From Lemma 1, if ||xs − µk||2 +2 > α, then xs does not belong to +kth cluster, which is bk +s = 0. Consequently, if we can determine that bk +s = 0,∀k ∈ K \ {k′}, then +bk′ +s = 1. However, the value of µ here is unknown before obtaining the optimal solution. One +observation is that if the BB node with region M contains the optimal solution, then we have +βk +s (M k) = min +µk∈Mk ||xs − µk||2 +2 ≤ ||xs − µk||2 +2. Therefore, if βk +s (M k) > α, sample xs is not in the kth +cluster and bk +s = 0. 
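A minimal Julia sketch of this center-based test is given below. It reuses the closed-form per-sample, per-cluster bound from Section 3.1, which is obtained by clamping the sample onto the box M^k (the mid{·} rule for Problem 11). The function names, the representation of M^k by two vectors lo and hi, and the handling of the degenerate case are illustrative assumptions, not the exact code of our implementation.

    # Closed-form value of min over mu in the box [lo, hi] of ||x - mu||^2 (Problem 11):
    # the minimizer is the attribute-wise projection mid{lo, x, hi}, i.e., clamp.(x, lo, hi).
    beta_sk(x::AbstractVector, lo::AbstractVector, hi::AbstractVector) =
        sum(abs2, x .- clamp.(x, lo, hi))

    # Center-based assignment for one sample: if the bound exceeds the upper bound alpha
    # for every cluster but one, the sample must belong to the remaining cluster.
    # lo[k], hi[k] describe the center region M^k; returns a cluster index or `nothing`.
    function center_based_assignment(x, lo::Vector, hi::Vector, alpha::Real)
        candidates = [k for k in eachindex(lo) if beta_sk(x, lo[k], hi[k]) <= alpha]
        # if no cluster remains feasible, the node itself can be fathomed; we return nothing
        return length(candidates) == 1 ? candidates[1] : nothing
    end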
In summary, for sample xs, if ∀k ∈ K \ {k′}, βk +s (M k) > α, then xs is assigned to +cluster k′ with bk′ +s = 1. Figure 2 illustrates an example in two-dimensional space with three clusters. +This center-based method can be adopted at every node of the BB scheme. Since βk +s (M k) is +already obtained when computing the lower bound in Section 4.2.1, there is no additional compu- +tational cost. Nevertheless, we do not need to apply this method at the root node since M 1 +0 = ··· = +M K +0 . As the BB scheme continues branching on the regions of centers, M k becomes more and more +different from others. Then more samples can be pre-assigned using this center-based method. +Sample-Based Assignment: Besides utilizing centers to pre-assign samples, assigned samples +can also help pre-assign other samples. From Lemma 2, if ||xi −xj||2 +2 > 4α, then xi and xj are not in +the same cluster. If xj belongs to kth cluster, then obviously xi cannot be assigned to kthe cluster +and bk +i = 0. With this relationship, if all the other K − 1 clusters are excluded, xi will be assigned +to the remaining cluster. Figure 3 shows an example of the sample-based assignment. + +Ren et al.: Global Optimization for K-Center of One Billion Samples +14 +There is a prerequisite to using this sample-based method. For each cluster, there must be at least +one sample already assigned to the cluster. Based on this prerequisite, sample-based assignment is +utilized only after at least one sample is pre-assigned for each cluster. +4.1.2. Bounds Tightening +In this subsection, we adopt the Bounds Tightening (BT) tech- +nique and the cluster assignment information to reduce the region of µ. +Ball-based Bounds Tightening: For a sample j, Bα(xj)={x| ||x − xj||2 +2 ≤ α} represents the +ball with center xj and radius √α. By using cluster assignment methods in Section 4.1.1, assuming +that sample j belongs to kth cluster is already known, by Lemma 1, then µk ∈ Bα(xj) holds. We +use J k to denote the index of all samples assigned to kth cluster, i.e., J k = {j ∈ S | bk +j = 1}, +then µk ∈ Bα(xj),∀j ∈ J k. Besides this, we also know that µk ∈ X ∩ M k. Denote Sk ++ as the index +set of samples satisfying all these constraints, Sk ++(M) := {s ∈ S |xs ∈ X ∩ M k,xs ∈ Bα(xj),∀j ∈ +J k}. In this way, we can obtain a tightened box containing all feasible solutions of kth center, +ˆ +M k={µk|ˆµk ≤ µk ≤ ˆ¯µk}, with the bounds of ath attribute in kth center to be ˆµk +a= +min +s∈Sk ++(M)xk +s,a and +ˆ¯µk +s= max +s∈Sk ++(M)xk +s,a. Figure 4 gives an example of bounds tightening using this method. One challenge +of this ball-based bounds tightening method is that it needs to compute the distance of xs and xj +for all s ∈ S and j ∈ J k. If we know the assignments of the majority of the samples, we need to +do at most S2 times of distance calculation. Note that we only need to do S ∗ K times of distance +calculation to compute a lower bound. To reduce the computational time, we set a threshold on +the maximum number of balls (default: 50) utilized to tighten bounds in our implementation. +Box-based Bounds Tightening: Another strategy to reduce the computation burden is based +on the relaxation of Bα(xj). For any ball Bα(xj), the closed set Rα(xj) = {x | xj − √α ≤ x ≤ +xj + √α} is the smallest box containing Bα(xj). Then we have µk ∈ Rα(xj),∀j ∈ J k. Since Rα(xj) +and M k are all boxes, we can easily compute the tighten bounds ˆ +M k=� +j∈J k Rα(xj) ∩ M k. Figure +5 gives an example of box-based bounds tightening using this method. 
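Because both R_α(x_j) and M^k are axis-aligned boxes, the box-based tightening reduces to coordinate-wise interval intersections. The following Julia sketch illustrates the computation; lo and hi describe the current region M^k and assigned collects the samples in J^k, with names chosen for illustration only.

    # Box-based bounds tightening for one cluster: intersect the current center box
    # [lo, hi] with R_alpha(x_j) = [x_j - sqrt(alpha), x_j + sqrt(alpha)] for every sample
    # x_j already assigned to this cluster (Lemma 1). Returns the tightened box, or
    # `nothing` if the intersection is empty, in which case the node can be fathomed.
    function box_bounds_tightening(lo::Vector{Float64}, hi::Vector{Float64},
                                   assigned, alpha::Real)
        r = sqrt(alpha)
        new_lo, new_hi = copy(lo), copy(hi)
        for xj in assigned               # xj is one assigned sample (a vector of attributes)
            new_lo .= max.(new_lo, xj .- r)
            new_hi .= min.(new_hi, xj .+ r)
        end
        return any(new_lo .> new_hi) ? nothing : (new_lo, new_hi)
    end

In the full algorithm, the resulting box is further shrunk according to the "centers on samples" constraint, as in Algorithm 4.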
Obviously, the bounds +generated in Figure 4 is much tighter, while the method in Figure 5 is much faster. Consequently, +if |J k| is small for all clusters, the ball-based bounds tightening method gives more satisfactory +results. While if |J k| is large for any k, box-based bounds tightening provides a cheaper alternative. + +Ren et al.: Global Optimization for K-Center of One Billion Samples +15 +4.1.3. Symmetry Breaking +Another way to get tighter bounds is based on symmetry- +breaking constraints. We add the constraints µ1 +1 ≤ µ2 +1 ≤ ··· ≤ µK +1 in the BB algorithm 1, in which +µk +a denotes ath attribute of kth center. Note that symmetry-breaking constraints and FFT-based +initial seeds in Section 4.1.1 both break symmetry by providing a certain order for the clusters, so +they cannot be combined. Our implementation uses symmetric breaking only when initial seeds are +not found from FFT at the root node. It should be noted that we also add this symmetry-breaking +constraints when using CPLEX to solve the MINLP formulation 4 of the K-center problem. +4.2. Sample Reduction +Some samples may become redundant during the lower and upper bounding procedure without +contributing to the bound improvements. If these samples are proven to be redundant in all the +current and future branch nodes, we can conclude they will not influence the bounding results +anymore, resulting in sample reduction. +4.2.1. Redundant samples in lower bounding +Denote β as the current best lower bound +obtained using methods described in Section 3.1. According to Equation 8, lower bound β(M) is +the maximum value of each sample’s optimal value, βs(M). Based on this observation, we further +define the best maximum distance of sample s to the center region of µ as +αs(M) = min +k∈K max +µk∈Mk ||xs − µk||2 +2, +(13) +It is obvious that βs(M) ≤ αs(M). If αs(M) < β, we have βs(M) < β, which means sample s is +not the sample corresponding to maximum within-cluster distance. Hence, we can conclude that +sample s is a redundant sample in lower bounding for this BB node. Moreover, ∀M ′ ⊂ M, we +have βs(M ′) ≤ αs(M ′) ≤ αs(M). According to the shrinking nature of center region M and the +non-descending nature of lower bound β, if αs(M) < β is true in a BB node, sample s will remain +redundant in all the child nodes of this branch node. It should be noted that αs(M) can be +calculated using an analytical solution similar to βs(M), which is µk +a = µk +a if |µk +a −xs,a| > |¯µk +a −xs,a|, +otherwise ¯µk +a. + +Ren et al.: Global Optimization for K-Center of One Billion Samples +16 +4.2.2. Redundant samples in upper bounding +Obviously, a sample xj cannot be the +center for kth cluster if it does not belong to M k. Moreover, according to Lemma 1, if a sample xj +is the center for cluster K, ||xi −xj||2 +2 ≤ α must hold for all the samples xi assigned to this cluster. +Hence, a sample xj also cannot be the center for kth cluster, if there exists a sample xi assigned to +kth cluster satisfying ||xi −xj||2 +2 > α. If sample xj cannot be centers for any cluster, we denote this +sample xj as a redundant sample for upper bounding. Since the non-ascending nature of upper +bound α, if sample s is redundant for upper bounding in a branch node, it will remain redundant +in all the child nodes of this branch node. It should be noted that the calculations in this method +are identical to Sample-Based Assignment in Section 4.1.1 with no extra calculations introduced +in this method. +4.2.3. 
Sample reduction +If a sample s is redundant in lower bounding, it implies that sample +s is not the “worst-case sample” corresponding to the maximum within-cluster distance. If a sample +s is redundant in upper bounding, then it means that sample s cannot be a center for any cluster. If +the sample s is redundant in both lower bounding and upper bounding, then removing this sample +will not affect the solution of this BB node and all its child BB nodes. Algorithm 5 describes the +procedure of sample reduction: first, obtain the redundant samples for lower and upper bounding +in each branch node; then, we can delete the samples that are redundant for both lower and upper +bounding in all the branch nodes. In our implementation, this sample reduction method is executed +for every isr iterations. +4.2.4. Effects on computation +Sample reduction can reduce the number of samples that +need to be explored by deleting redundant samples every isr iterations, as described in Algorithm +5. It can also accelerate the calculation of lower bounds and bounds tightening at each iteration. +For the lower bounding method in Section 3.1, we only need to solve the second-stage problems for +non-redundant samples that have been validated by the lower-bounding criterion in Section 4.2.1. +Additionally, once a sample is deemed redundant for lower bounding in a particular node, it will +remain redundant in all child nodes of that node. This means that we do not need to solve the + +Ren et al.: Global Optimization for K-Center of One Billion Samples +17 +second-stage problem for this sample in the current node or any of its child nodes. For the bounds +tightening methods in Section 4.1.2, we only need to calculate the bounds based on non-redundant +samples that have been validated by the upper-bounding criterion in Section 4.2.2. Similarly, if +a sample is redundant for upper bounding in a node, it will remain redundant in all child nodes +of that node, and can be eliminated from the bounds tightening calculations in the current node +and its child nodes. In this way, sample reduction can not only delete redundant samples at every +isr iterations, but also eliminate redundant information in the current node and its child nodes, +thereby accelerating the overall calculation. +4.3. Parallelization +We also provide a parallel implementation of the whole algorithm to accelerate the solving process. +Since our algorithm is primarily executed at the sample level, like βs(M) in the lower bounding, we +can parallelize the algorithm by distributing the dataset to each process equally, then calculating +on each process with the local dataset and communicating the results as needed. The detailed +parallelization framework is shown in Figure 6. Here, the green modules represent the parallel +operations at each process, and the blue modules represent serial reduction operations. This par- +allelization framework is realized utilizing Message-Passing Interface (MPI) and MPI.jl by (Byrne +et al. 2021). +5. Convergence Analysis +As stated in Theorem 1, the branch-and-bound scheme for the K-center problem in Algorithm 1 +converges to the global optimal solution after a finite step. In this section, we present the proof of +this theorem. +Specifically, the branch-and-bound scheme in Algorithm 1 branches on the region of centers, µ, +and generates a rooted tree with the search space M0 at the root node. For the child node at qth +level and lqth iteration, we denote the search space as Mlq. 
The search space of its child node is denoted as Mlq+1, satisfying Mlq+1 ⊂ Mlq. We denote the decreasing sequence from the root node with M0 to the child node with Mlq as {Mlq}. The search space of the kth cluster center at Mlq is denoted as M^k_lq. Along the branch-and-bound process, we obtain a monotonically non-ascending upper bound sequence {αi} and a monotonically non-descending lower bound sequence {βi}.

In the following convergence analysis, we adapt the fundamental conclusions of Horst and Tuy (2013) to our algorithm. It should be noted that the convergence established here for the K-center problem is stronger than the convergence analysis in Cao and Zavala (2019) for two-stage nonlinear optimization problems and the convergence proof in Hua et al. (2021) for the K-means clustering problem. Both Cao and Zavala (2019) and Hua et al. (2021) guarantee convergence only in the sense of lim_{i→∞} αi = lim_{i→∞} βi = z, so they can only produce a global ϵ-optimal solution in a finite number of steps. For the K-center problem, in contrast, our algorithm obtains an exact optimal solution (i.e., ϵ = 0) in a finite number of steps.

Definition 1. (Definition IV.3 in Horst and Tuy 2013) A bounding operation is called finitely consistent if, at every step, any unfathomed partition element can be further refined and if any decreasing sequence {Mlq} of successively refined partition elements is finite.

Lemma 3. The bounding operation in Algorithm 1 is finitely consistent.

Proof. First, we prove that any unfathomed partition element Mlq can be further refined. Any unfathomed Mlq satisfies two conditions: (1) ∃ |X ∩ M^k_lq| > 1, k ∈ K, and (2) αl − β(Mlq) > ϵ, ϵ > 0. By condition (1), some cluster region M^k_lq contains more than one sample, so Mlq can be further subdivided.

We then prove that any decreasing sequence {Mlq} of successively refined partition elements is finite. Assume by contradiction that such a sequence {Mlq} is infinite. Since we branch on the first-stage variable µ by bisecting the coordinate with the largest range (Section 3.3), this subdivision is exhaustive. Therefore, lim_{q→∞} δ(Mlq) = 0, where δ(Mlq) is the diameter of the set Mlq, and {Mlq} converges to a single point ¯µ for each cluster.

If ¯µ ∈ X, there exists a ball around ¯µ, denoted Br(¯µ) = {µ | ||µ − ¯µ|| ≤ r}, such that |X ∩ Br(¯µ)| = 1. Then there exists a level q0 such that Mlq ⊂ Br(¯µ) for all q ≥ q0. At iteration lq0, according to the terminal condition |X ∩ M^k_lq| = 1, ∀k ∈ K, the partition element Mlq0 is not branched any further. Because the dataset X is finite, the sequence {Mlq} is finite in this case. If ¯µ ∉ X, there exists a ball around ¯µ, Br(¯µ) = {µ | ||µ − ¯µ|| ≤ r}, satisfying |X ∩ Br(¯µ)| = 0. Again there exists a level q0 such that Mlq ⊂ Br(¯µ) for all q ≥ q0, and at iteration lq0 the node Mlq0 is deleted according to the terminal conditions. Consequently, the sequence {Mlq} is also finite in this case. In conclusion, no infinite sequence {Mlq} can exist.

Theorem 2. (Theorem IV.1 in Horst and Tuy 2013) In a BB procedure, suppose that the bounding operation is finitely consistent. Then the procedure terminates after finitely many steps.

Lemma 4. Algorithm 1 terminates after finitely many steps.

Proof. From Lemma 3, the bounding operation in Algorithm 1 is finitely consistent.
According to Theorem 2, Algorithm 1 therefore terminates after finitely many steps.

Finally, we prove that the BB scheme for the K-center problem is convergent:

Theorem 1. Algorithm 1 converges to the global optimal solution after a finite number of steps L, with βL = z = αL, by only branching on the space of µ.

Proof. From Lemma 4, Algorithm 1 terminates after finitely many steps. The algorithm terminates in one of two situations. The first situation is |βl − αl| ≤ ϵ, ϵ ≥ 0; when ϵ is set to 0, we have βl = z = αl. The second situation is that the branch node set M = ∅. A branch node with region M is deleted from M and not further partitioned if it satisfies β(M) > αl or |X ∩ M^k| = 1, ∀k ∈ K. In the first case, the branch node clearly does not contain the global optimal solution µ∗. Therefore, the branch node with region M′ that contains the optimal solution µ∗ can only stop being partitioned because of the second case, |X ∩ M′^k| = 1, ∀k ∈ K. After bounds tightening according to the "centers on samples" constraint, the tightened node is M′ = {µ∗}. For this tightened node, we then have βl = β(M′) = z = α(M′) = αl. In this way, we have proved Theorem 1.

6. Numerical Results
In this section, we report the detailed implementation of our algorithm and the numerical results on synthetic and real-world datasets.

6.1. Implementation Details
We denote our tailored reduced-space branch and bound algorithm (Algorithm 1) with and without the acceleration techniques as BB+CF+BT and BB+CF, respectively. All our algorithms are implemented in Julia, and the parallel version is realized using the Message Passing Interface through the MPI.jl module. We compare the performance of our algorithm with the state-of-the-art global optimizer CPLEX 20.1.0 (Cplex 2020) and the heuristic algorithm Farthest First Traversal (FFT), shown in Algorithm 2. Because the initial points strongly influence the results of FFT, we execute FFT for 100 trials with randomly selected initial points and report the best results. For CPLEX, we use the MINLP formulation 4 with the symmetry-breaking constraints to solve the K-center problem.

We executed all experiments on the high-performance computing cluster Niagara in the Digital Research Alliance of Canada. Each computing node of the Niagara cluster has 40 Intel "Skylake" cores and 188 GiB of RAM. For the global optimizer CPLEX and our algorithms, a time limit of 4 hours is set to compare the performance fairly and to avoid unacceptable computational costs. For our algorithms, there is also an optimality gap limit of 0.1%. The source code is available at https://github.com/YankaiGroup/global_kcenter_extended.

To evaluate the performance extensively, we execute all the algorithms on both synthetic and real-world datasets. The synthetic datasets are generated using the Distributions.jl and Random.jl modules in Julia, with 3 Gaussian clusters, 2 attributes, and varying numbers of samples. As for the real-world datasets, we use 30 datasets from the UCI Machine Learning Repository (Dua and Graff 2017), the dataset Pr2392 from Padberg and Rinaldi (1991), Hemi from Wang et al. (2022), and Taxi from Schneider (2015). The number of samples ranges from 150 to 1,120,841,769, and the number of attributes ranges from 2 to 68. The detailed characteristics of the datasets can be found in the following result tables.
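Since FFT serves both as the root-node upper bound in Section 3.2 and as the heuristic baseline in the experiments, we include a compact Julia sketch of the multi-start protocol described above (Algorithm 2 restarted from 100 random initial points, keeping the best objective). The function names are illustrative and the sketch omits the bookkeeping of our actual implementation.

    using Random

    # Farthest first traversal (Algorithm 2) started from sample index s0.
    # Returns the K selected center indices and the K-center objective they attain.
    function fft_centers(X::AbstractMatrix, K::Int, s0::Int)
        S = size(X, 1)
        chosen = [s0]
        d = [sum(abs2, X[s, :] .- X[s0, :]) for s in 1:S]   # squared distance to the set T
        while length(chosen) < K
            s_far = argmax(d)                               # farthest sample from T
            push!(chosen, s_far)
            for s in 1:S
                d[s] = min(d[s], sum(abs2, X[s, :] .- X[s_far, :]))
            end
        end
        return chosen, maximum(d)
    end

    # Multi-start FFT: rerun from random initial samples and keep the best solution.
    function multistart_fft(X::AbstractMatrix, K::Int; trials::Int = 100,
                            rng = Random.default_rng())
        best_centers, best_obj = Int[], Inf
        for _ in 1:trials
            centers, obj = fft_centers(X, K, rand(rng, 1:size(X, 1)))
            if obj < best_obj
                best_centers, best_obj = centers, obj
            end
        end
        return best_centers, best_obj
    end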
+We report four criteria in the following result tables to compare the performance of algorithms: +upper bound (UB), optimality gap (Gap), the number of solved BB nodes (Nodes), and the run +time (Time). UB is the best objective value of the K-center Problem 1. Gap represents the relative + +Ren et al.: Global Optimization for K-Center of One Billion Samples +21 +difference between the best lower bound (LB) and UB. It is defined as Gap = UB−LB +UB +× 100%. +The optimality gap is a unique property of the deterministic global optimization algorithm. The +heuristic algorithm (FFT) does not have this property. Nodes and Time are the iteration number +and the run time of the BB scheme from the beginning to the termination. +6.2. Serial Results on Synthetic Datasets +Table 1 reports the serial results of synthetic datasets with different numbers of samples and +different desired clusters (K = 3,5,10). Compared with the heuristic method FFT, our algorithm +BB+LD+BT can reduce UB by 29.4% average on these synthetic datasets. These results validate the +conclusion from Garcia-Diaz et al. (2019) that these 2-approximation heuristic algorithms perform +poorly in practice despite the solution quality guarantee. +As for the comparison of global optimizers, the direct usage of CPLEX on Problem 4 could not +converge to a small optimality gap≤ 0.1% within 4 hours on all the synthetic datasets. BB+LD with- +out acceleration techniques can obtain the small optimality gap≤ 0.1% within 4 hours on synthetic +datasets smaller than 42,000 samples with desired clusters K = 3. The algorithm BB+LD+BT can +obtain the best upper bounds and reach a satisfactory gap≤ 0.1% in most experiments within 4 +hours. Moreover, compared with BB+LD, BB+LD+BT needs fewer nodes and less run time to obtain +the same optimality gap. For example, for the Syn-1200 dataset with K = 3, BB+LD need 1,155,375 +nodes and 3609 seconds to reach a gap≤ 0.1%, while BB+LD+BT only needs 23 nodes and 13.5 sec- +onds. These comparisons between BB+LD and BB+LD+BT demonstrate the acceleration techniques in +Section 4 can significantly reduce the search space and accelerate the BB procedure. +6.3. Serial Results on Real-world datasets +Table 2, Table 3, and Table 4 show the serial results on real-world datasets with different sample +numbers and desired cluster numbers (K = 3,5,10). In these tables, we highlight the best results +among these algorithms with the optimality gap≤ 0.1%. These real-world results are consistent +with the results of synthetic datasets. + +Ren et al.: Global Optimization for K-Center of One Billion Samples +22 +The best solutions generated by the heuristic method (FFT) can be far from optimal in these +tables, even for very small datasets. For example, for IRIS dataset, FFT obtains UB of 3.66 while +our algorithm and CPLEX give a UB of 2.04 with ≤ 0.1% gap. Compared with FFT, our algorithm +BB+CF+BT can averagely reduce the UB by 22.2% on these real-world datasets and 25.8% on all the +synthetic and real-world datasets. Even for experiments terminated with large gaps, in most cases, +BB+CF+BT can obtain a smaller UB than FFT. +For small datasets, our algorithms BB+CF and BB+CF+BT can obtain the same UB as CPLEX. +However, CPLEX needs significantly more run time and nodes than our algorithms. For all datasets +with more than 740 samples, CPLEX cannot even give an optimality gap≤ 50% within 4 hours. On +the contrary, BB+CF+BT can obtain the best UB and a satisfactory gap≤ 0.1% for most datasets. 
+The comparisons of the two versions of our algorithms BB+CF and BB+CF+BT demonstrate that +the acceleration techniques in Section 4 can significantly reduce the computational time and the +number of BB nodes to solve the problems. Remarkably, with these acceleration techniques, we +can even solve several datasets in the root node (Nodes=1), e.g., the datasets iris, HF, and SGC. +Besides, BB+CF+BT results with superscript 1 in these tables mean we can assign K farthest initial +seeds through FFT at the root node as described in Section 4.1.1. We can obtain the initial seeds +for about half of the datasets when K = 3. Moreover, the number of nodes is much smaller for the +datasets with initial seeds than the datasets without initial seeds. This phenomenon indicates the +initial seeds are essential for cluster assignment and bounds tightening since we need at least one +assigned sample at each cluster to execute the sample-based assignment. +For most of the datasets with millions of samples and K = 3 in Table 4, BB+CF+BT can converge +to a small gap≤ 0.1% and provide the best optimal solution after 4 hours of running. To the best +of our knowledge, it is the first time that the K-center problem is solved under a relatively small +gap≤ 0.1% within 4 hours on datasets over 14 million samples in the serial mode. +As a drawback, our algorithm BB+LD+BT still struggles to obtain a small optimality gap when the +desired number of clusters is larger than 3. However, it should be noted the state-of-art global opti- +mizer CPLEX cannot even solve any datasets to gap≤ 50% when K > 3. On the contrary, BB+LD+BT + +Ren et al.: Global Optimization for K-Center of One Billion Samples +23 +can obtain gap≤ 0.1% on most datasets with less than 5 million samples and K = 5. Moreover, for +the cases when our algorithm BB+LD+BT cannot obtain a small optimality gap, it still gives the best +UB among all the algorithms in these experiments. +6.4. Parallel Results on Huge-scale Real-world datasets +To fully utilize the computational ability of high-performance clusters, we implement our algorithm +BB+CF+BT in a parallel manner as shown in Section 4.3. Here, we test the parallel algorithm on +datasets that couldn’t obtain a small gap≤ 0.1% for K = 3 within 4 hours in the serial mode, +including two datasets with ten million samples, HIGGS and BigCross. Moreover, we also extend +the experiments to a billion-scale dataset called Taxi. This billion-scale dataset contains over 1.1 +billion individual taxi trips with 12 attributes in New York City from January 2009 through June +2015. We preprocess the Taxi dataset according to the analysis by Schneider (2015) to remove +outliers and missing values in the dataset. As an outcome shown in Table 5, the parallel version +of BB+CF+BT can reach a small optimality gap≤ 0.1% and a better UB on the datasets BigCross +and Taxi within 4 hours. For the dataset HIGGS, the parallel version achieves a smaller UB and +gap compared to the heuristic method and the serial version. As far as we know, this is the first +time that the K-center problem is solved under a relatively small gap≤ 0.1% within 4 hours on the +billion-scale dataset. +7. Conclusion +We propose a global optimization algorithm for the K-center problem using a tailored reduced +space branch and bound scheme. In this algorithm, we only need to branch on the region of cluster +centers to guarantee convergence to the global optimal solution in a finite step. 
+We give a two-stage decomposable formulation and an MINLP formulation of the K-center +problem. With this two-stage formulation, we develop a lower bound with closed-form solutions by +relaxing the non-anticipativity constraints and the “centers on sample” constraints. As an outcome, +the proposed bounding methods are extremely computationally efficient with no needs to solve any +optimization sub-problems using any optimizers. + +Ren et al.: Global Optimization for K-Center of One Billion Samples +24 +Along with the BB procedure, we introduce several acceleration techniques based on the MINLP +formulation, including bounds tightening, and sample reduction. Numerical experiments show these +acceleration techniques can significantly reduce the search space and accelerate the solving pro- +cedure. Moreover, we also give a parallel implementation of our algorithm to fully utilize the +computational power of modern high performance clusters. +Extensive numerical experiments have been conducted on synthetic and real-world datasets. +These results exhibit the efficiency of our algorithm: we can solve the real-world datasets with up +to ten million samples in the serial mode and one billion samples in the parallel mode to a +small optimality gap (≤0.1%) within 4 hours. +Finally, we also declare that our algorithm is promised to extend to deal with certain constrained +versions of K-center problems. For example, the capacitated restricted version, absolute and vertex +restricted version (Calik 2013). We are interested in developing these variants in future work. +Acknowledgments +The authors acknowledge funding from the discovery program of the Natural Science and Engineering +Research Council of Canada under grant RGPIN-2019-05499 and the computing resources provided by SciNet +(www.scinethpc.ca) and Digital Research Alliance of Canada (www.alliancecan.ca). Jiayang Ren acknowl- +edges the financial support from the China Scholarship Council. +References +Aggarwal CC, Wolf JL, Yu PSl (2004) Method for targeted advertising on the web based on accumulated +self-learning data, clustering users and semantic node graph techniques. US Patent 6,714,975. +Aloise D, Contardo C (2018) A sampling-based exact algorithm for the solution of the minimax diameter +clustering problem. Journal of Global Optimization 71(3):613–630. +Brusco MJ, Stahl S (2005) Branch-and-Bound Applications in Combinatorial Data Analysis (New York: +Springer). +Byrne S, Wilcox LC, Churavy V (2021) MPI.jl: Julia bindings for the Message Passing Interface. Proceedings +of the JuliaCon Conferences 1(1):68. + +Ren et al.: Global Optimization for K-Center of One Billion Samples +25 +Calik H (2013) Exact Solution Methodologies for the P-Center Problem under Single and Multiple Allocation +Strategies. Theses, Bilkent University. +Cao Y, Zavala VM (2019) A scalable global optimization algorithm for stochastic nonlinear programs. Journal +of Global Optimization 75(2):393–416. +Chen D, Chen R (2009) New relaxation-based algorithms for the optimal solution of the continuous and +discrete p-center problems. Computers & Operations Research 36(5):1646–1655. +Chen R, Handler GY (1987) Relaxation method for the solution of the minimax location-allocation problem +in euclidean space. Naval Research Logistics (NRL) 34(6):775–788. +Contardo C, Iori M, Kramer R (2019) A scalable exact algorithm for the vertex p-center problem. Computers +& Operations Research 103:211–220. +Cook W, Lovasz L, Seymour P, eds. (1995) Combinatorial Optimization. 
DIMACS Series in Discrete Mathe- +matics and Theoretical Computer Science (Providence, Rhode Island: American Mathematical Society). +Cplex II (2020) V20.1.0: User’s Manual for CPLEX. International Business Machines Corporation . +Dao TBH, Duong KC, Vrain C (2013) A declarative framework for constrained clustering. Machine Learning +and Knowledge Discovery in Databases, volume 8190, 419–434. +Daskin MS (2000) A new approach to solving the vertex p-center problem to optimality: Algorithm and +computational results. Communications of the Operations Research Society of Japan 45(9):428–436. +Davidovi´c T, Ramljak D, ˇSelmi´c M, Teodorovi´c D (2011) Bee colony optimization for the p-center problem. +Computers and Operations Research 38(10):1367–1376. +Dua D, Graff C (2017) UCI machine learning repository. URL http://archive.ics.uci.edu/ml. +Duong KC, Vrain C, et al. (2017) Constrained clustering by constraint programming. Artificial Intelligence +244:70–94. +Dyer ME, Frieze AM (1985) A simple heuristic for the p-centre problem. Operations Research Letters +3(6):285–288. +Elloumi S, Labb´e M, Pochet Y (2004) A new formulation and resolution method for the p-center problem. +INFORMS Journal on Computing 16(1):84–94. + +Ren et al.: Global Optimization for K-Center of One Billion Samples +26 +Garcia-Diaz J, Menchaca-Mendez R, Menchaca-Mendez R, Pomares Hern´andez S, P´erez-Sansalvador JC, +Lakouari N (2019) Approximation algorithms for the vertex K-Center problem: survey and experimental +evaluation. IEEE Access 7:109228–109245. +Garcia-Diaz J, Sanchez-Hernandez J, Menchaca-Mendez R, Menchaca-Mendez R (2017) When a worse +approximation factor gives better performance: a 3-approximation algorithm for the vertex k-center +problem. Journal of Heuristics 23(5):349–366. +Garey M, Johnson D (1979) Computers and Intractability: A Guide to the Theory of NP-Completeness (New +York: W. H. Freeman). +Gonzalez TF (1985) Clustering to minimize the maximum intercluster distance. Theoretical Computer Sci- +ence 38:293–306. +Hansen P, Brimberg J, Uroˇsevi´c D, Mladenovi´c N (2009) Solving large p-median clustering problems by +primal–dual variable neighborhood search. Data Mining and Knowledge Discovery 19(3):351–375. +Hesabi ZR, Tari Z, Goscinski A, Fahad A, Khalil I, Queiroz C (2015) Data summarization techniques for +big data—a survey. Handbook on Data Centers, 1109–1152. +Hochbaum DS, Shmoys DB (1985) A best possible heuristic for the K-Center problem. Mathematics of +Operations Research 10(2):180–184. +Horst R, Tuy H (2013) Global optimization: Deterministic approaches (Springer Science & Business Media). +Hua K, Shi M, Cao Y (2021) A scalable deterministic global optimization algorithm for clustering problems. +International Conference on Machine Learning, 4391–4401. +Ilhan T, Pinar MC (2001) An efficient exact algorithm for the vertex p-center problem. Preprint.[Online]. +Available: http://www.ie.bilkent.edu.tr/mustafap/pubs . +Kaufman L, Rousseeuw PJ (2009) Finding Groups in Data: an Introduction to Cluster Analysis (John Wiley +& Sons). +Kleindessner M, Awasthi P, Morgenstern J (2019) Fair k-center clustering for data summarization. Interna- +tional Conference on Machine Learning, 3448–3457. +Lim A, Rodrigues B, Wang F, Xu Z (2005) K-Center problems with minimum coverage. Theoretical Computer +Science 332(1):1–17. + +Ren et al.: Global Optimization for K-Center of One Billion Samples +27 +Miheliˇc J, Robic B (2005) Solving the k-center problem efficiently with a dominating set algorithm. 
Journal +of computing and information technology 13:225–234. +Minieka E (1970) The m-Center Problem. SIAM Review 12(01). +Mladenovi´c N, Labb´e M, Hansen P (2003) Solving the p-Center problem with Tabu search and variable +neighborhood Search. Networks 42(1):48–64. +Padberg M, Rinaldi G (1991) A branch-and-cut algorithm for the resolution of large-scale symmetric traveling +salesman problems. SIAM review 33(1):60–100. +Plesn´ık J (1987) A heuristic for the p-center problems in graphs. Discrete Applied Mathematics 17(3):263– +268. +Pullan W (2008) A memetic genetic algorithm for the vertex p-center problem. Evolutionary Computation +16(3):417–436. +Schneider T (2015) Analyzing 1.1 billion NYC taxi and uber trips with a vengeance. URL https:// +toddwschneider.com/posts/analyzing-1-1-billion-nyc-taxi-and-uber-trips-with-a-vengeance/. +Shi M, Hua K, Ren J, Cao Y (2022) Global optimization of K-Center clustering. Proceedings of the 39th +International Conference on Machine Learning, 19956–19966. +Wang E, Cai G, Ballachay R, Cao Y, Trajano HL (2022) Predicting xylose yield in prehydrolysis of hard- +woods: a machine learning approach. Frontiers in Chemical Engineering 84. + +Ren et al.: Global Optimization for K-Center of One Billion Samples +28 +Table 1 +Serial results on synthetic datasets +Dataset +Sam +ple +Dimen +sion +Method +K=3 +K=5 +K=10 +UB +Nodes +Gap +(%) +Time +(s) +UB +Nodes +Gap +(%) +Time +(s) +UB +Nodes +Gap +(%) +Time +(s) +Syn-300 +3.0E+2 +2 +FFT +69.68 +- +- +- +43.33 +- +- +- +21.88 +- +- +- +CPLEX +61.75 +2.9E+4 +≤0.1 +29 +37.14 +2.3E+7 +19.4 +4h +16.06 +1.2E+7 +100.0 +4h +BB+CF +61.75 +5.5E+4 +≤0.1 +46 +37.14 +2.3E+6 +16.2 +4h +15.64 +1.7E+6 +100.0 +4h +BB+CF+BT +61.75 +17 +≤0.1 +13 +37.14 +1,764 +≤0.1 +15 +12.31 2.0E+4 ≤0.1 +38 +Syn-1200 +1.2E+3 +2 +FFT +93.34 +- +- +- +58.46 +- +- +- +30.49 +- +- +- +CPLEX +84.81 +5.8E+6 +1.6 +4h +34.29 +3.5E+6 +7.8 +4h +89.32 +8.1E+5 +100.0 +4h +BB+CF +84.81 +1.2E+6 +≤0.1 +3,609 +34.29 +1.4E+6 +12.5 +4h +21.81 +1.0E+6 +100.0 +4h +BB+CF+BT +84.81 +23 +≤0.11 +14 +34.29 +411 +≤0.1 +15 +14.51 3.0E+4 ≤0.1 +148 +Syn-2100 +2.1E+3 +2 +FFT +106.50 +- +- +- +72.70 +- +- +- +36.04 +- +- +- +CPLEX +95.10 +3.0E+6 +0.2 +4h +49.32 +1.3E+6 +100.0 +4h +193.26 +3.4E+5 +100.0 +4h +BB+CF +95.10 +1.5E+6 +≤0.1 +11,606 +42.58 +1.0E+6 +20.8 +4h +25.78 +5.3E+5 +100.0 +4h +BB+CF+BT +95.10 +17 +≤0.11 +13 +42.58 +455 +≤0.1 +16 +17.65 8.9E+4 ≤0.1 +725 +Syn-42000 +4.2E+4 +2 +FFT +161.98 +- +- +- +96.12 +- +- +- +47.21 +- +- +- +CPLEX +No feasible solution +No feasible solution +No feasible solution +BB+CF +142.33 +1.7E+5 +6.7 +4h +63.40 +1.0E+5 +28.1 +4h +44.24 +5.4E+4 +100.0 +4h +BB+CF+BT 142.33 +103 +≤0.1 +21 +62.77 5.0E+3 ≤0.1 +363 +28.29 +5.8E+4 +36.1 +4h +Syn-210000 2.1E+5 +2 +FFT +175.81 +- +- +- +120.78 +- +- +- +66.79 +- +- +- +CPLEX +No feasible solution +No feasible solution +No feasible solution +BB+CF +168.57 +4.4E+4 +7.0 +4h +77.02 +2.5E+4 +43.8 +4h +53.73 +1.4E+4 +100.0 +4h +BB+CF+BT 168.57 +5 +≤0.11 +21 +71.88 2.4E+3 ≤0.1 1,118 +44.48 +1.2E+4 +72.2 +4h +1 Can assign K initial seeds through FFT at the root node. BB+CF+BT results without this superscript means can not assign initial seeds. 
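+The FFT seeding referenced in the table footnotes follows Gonzalez's farthest-first traversal: start
+from an arbitrary sample and repeatedly add the sample farthest from the already selected centers.
+The following Julia sketch uses hypothetical names (fft_seeds, X, K, start) and assumes the dataset is
+stored row-wise in a matrix X; it is an illustration of the heuristic, not the reference implementation.
+# Farthest-first traversal (Gonzalez 1985), minimal sketch.
+# X is an S×A matrix (one sample per row); returns K row indices used as initial centers
+# together with the corresponding root-node upper bound on the K-center objective.
+function fft_seeds(X::AbstractMatrix{<:Real}, K::Int; start::Int = 1)
+    S = size(X, 1)
+    centers = [start]
+    # dist2[s] = squared distance from sample s to its closest selected center
+    dist2 = [sum(abs2, X[s, :] .- X[start, :]) for s in 1:S]
+    for _ in 2:K
+        next = argmax(dist2)              # farthest sample from the current centers
+        push!(centers, next)
+        for s in 1:S                      # update the closest-center distances
+            dist2[s] = min(dist2[s], sum(abs2, X[s, :] .- X[next, :]))
+        end
+    end
+    return centers, maximum(dist2)        # maximum(dist2) is the upper bound α at these seeds
+end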
+ +Ren et al.: Global Optimization for K-Center of One Billion Samples +29 +Table 2 +Serial results on small-scale datasets (S ≤ 1,000) +Dataset +Sam +ple +Dimen +sion +Method +K=3 +K=5 +K=10 +UB +Nodes +Gap +(%) +Time +(s) +UB +Nodes +Gap +(%) +Time +(s) +UB +Nodes +Gap +(%) +Time +(s) +iris +150 +4 +FFT +2.65 +- +- +- +1.80 +- +- +- +0.95 +- +- +- +CPLEX +2.04 +1.2E+5 +≤0.1 +46 +1.54 +2.8E+6 +60.0 +4h +1.21 +1.4E+7 +100.0 +4h +BB+CF +2.04 +1.3E+4 +≤0.1 +17 +1.20 +3.1E+6 +≤0.1 +5,472 +0.74 +2.2E+6 +100.0 +4h +BB+CF+BT +2.04 +1 +≤0.11 +12 +1.20 +409 +≤0.1 +14 +0.66 +9.6E+5 +25.8 +4h +seeds +210 +7 +FFT +13.17 +- +- +- +9.01 +- +- +- +4.48 +- +- +- +CPLEX +10.44 +1.2E+6 +≤0.1 +542 +11.61 +2.5E+6 +96.1 +4h +21.48 +5.6E+5 +100.0 +4h +BB+CF +10.44 +7.2E+3 +≤0.1 +17 +7.22 +2.7E+6 +8.3 +4h +3.51 +1.5E+6 +100.0 +4h +BB+CF+BT +10.44 +21 +≤0.11 +13 +7.22 +1,444 +≤0.1 +15 +2.92 +2.1E+5 ≤0.1 +569 +glass +214 +9 +FFT +27.52 +- +- +- +22.28 +- +- +- +11.73 +- +- +- +CPLEX +Out of memory +Out of memory +Out of memory +BB+CF +27.52 +5.6E+3 +≤0.1 +15 +16.44 +9.7E+5 +≤0.1 +1,522 +10.64 +1.4E+6 +100.0 +4h +BB+CF+BT +27.52 +191 +≤0.1 +13 +16.44 +4.4E+3 +≤0.1 +17 +7.95 +1.7E+6 ≤0.1 9,180 +BM +249 +6 +FFT +1.52E+04 +- +- +- +1.12E+04 +- +- +- +5.33E+03 +- +- +- +CPLEX +No feasible solution +1.48E+04 +8.8E+6 +100.0 +4h +1.63E+04 +2.4E+6 +100.0 +4h +BB+CF +1.05E+04 +1.4E+4 +≤0.1 +22 +6.32E+03 +2.2E+6 +12.0 +4h +5.01E+03 +1.4E+6 +100.0 +4h +BB+CF+BT 1.05E+04 +63 +≤0.11 +13 +6.32E+03 1.8E+4 +≤0.1 +29 +4.98E+03 +6.7E+5 +97.9 +4h +UK +258 +5 +FFT +0.70 +- +- +- +0.57 +- +- +- +0.42 +- +- +- +CPLEX +Out of memory +Out of memory +Out of memory +BB+CF +0.53 +3.2E+5 +≤0.1 +258 +0.43 +1.5E+6 +43.9 +4h +0.33 +1.4E+6 +100.0 +4h +BB+CF+BT +0.53 +1.6E+4 +≤0.1 +23 +0.43 +8.9E+5 +26.9 +4h +0.31 +6.1E+5 +97.3 +4h +HF +299 +12 +FFT +2.69E+10 +- +- +- +1.17E+10 +- +- +- +1.68E+09 +- +- +- +CPLEX +No feasible solution +No feasible solution +No feasible solution +BB+CF +1.72E+10 +339 +≤0.1 +10 +1.02E+10 +2.1E+4 +≤0.1 +44 +1.52E+09 +3.4E+6 +100.0 +4h +BB+CF+BT 1.72E+10 +1 +≤0.11 +12 +1.02E+10 +557 +≤0.1 +14 +1.44E+09 +1.2E+6 +53.2 +4h +Who +440 +8 +FFT +4.58E+09 +- +- +- +3.18E+09 +- +- +- +9.81E+08 +- +- +- +CPLEX +No feasible solution +No feasible solution +No feasible solution +BB+CF +3.49E+09 +3.4E+3 +≤0.1 +15 +2.11E+09 +1.7E+5 +≤0.1 +341 +9.27E+08 +1.5E+6 +100.0 +4h +BB+CF+BT 3.49E+09 +375 +≤0.1 +14 +2.11E+09 2.3E+3 +≤0.1 +16 +8.21E+08 +8.4E+5 +62.0 +4h +HCV +602 +12 +FFT +1.75E+05 +- +- +- +8.38E+04 +- +- +- +3.03E+04 +- +- +- +CPLEX +1.41E+05 +9.5E+5 +≤0.1 +3,720 +8.73E+04 +5.1E+5 +100.0 +4h +4.47E+04 +3.8E+5 +100.0 +4h +BB+CF +1.41E+05 +291 +≤0.1 +10 +6.37E+04 +2.2E+4 +≤0.1 +76 +2.36E+04 +1.4E+6 +100.0 +4h +BB+CF+BT 1.41E+05 +39 +≤0.1 +13 +6.37E+04 +583 +≤0.1 +15 +2.16E+04 7.6E+5 ≤0.1 6,300 +Abs +740 +21 +FFT +1.94E+04 +- +- +- +1.19E+04 +- +- +- +7.81E+03 +- +- +- +CPLEX +1.72E+04 +9.3E+5 +52.0 +4h +3.02E+04 +1.7E+4 +100.0 +4h +No feasible solution +BB+CF +1.39E+04 +3.3E+4 +≤0.1 +153 +9.93E+03 +1.5E+6 +18.9 +4h +6.15E+03 +9.8E+5 +100.0 +4h +BB+CF+BT 1.39E+04 +611 +≤0.1 +15 +9.92E+03 5.0E+4 +≤0.1 +178 +6.37E+03 +4.6E+5 +98.3 +4h +TR +980 +10 +FFT +7.32 +- +- +- +6.42 +- +- +- +4.41 +- +- +- +CPLEX +8.32 +1.6E+6 +54.5 +4h +7.82 +2.0E+5 +100.0 +4h +8.70 +3.3E+4 +100.0 +4h +BB+CF +5.94 +7.4E+5 +≤0.1 +2,953 +4.49 +1.3E+6 +47.7 +4h +3.69 +9.8E+5 +100.0 +4h +BB+CF+BT +5.94 +3.3E+4 +≤0.1 +83 +4.49 +1.0E+6 +24.6 +4h +3.73 +5.0E+5 +99.9 +4h +SGC +1,000 +21 +FFT +1.33E+07 +- +- +- +4.08E+06 +- +- +- +9.50E+05 +- +- +- +CPLEX +9.45E+06 
+5.0E+4
+100.0
+4h
+1.56E+08
+10
+100.0
+4h
+No feasible solution
+BB+CF
+9.45E+06
+411
+≤0.1
+12
+3.91E+06
+2.8E+4
+≤0.1
+185
+9.50E+05
+9.6E+5
+100.0
+4h
+BB+CF+BT 9.45E+06
+1
+≤0.11
+12
+3.91E+06
+1
+≤0.11
+12
+9.50E+05
+5.8E+5
+100.0
+4h
+1 Can assign K initial seeds through FFT at the root node. BB+CF+BT results without this superscript mean that initial seeds cannot be assigned.
+Table 3
+Serial results on large-scale datasets (1,000 < S)
+Figure 1
+Initial seeds with 3 clusters. In this example, ||x1 − x2||_2^2 > 4α, ||x2 − x3||_2^2 > 4α, and ||x3 − x1||_2^2 > 4α.
+Therefore, we can arbitrarily assign x1, x2, x3 to 3 distinct clusters.
+Figure 2
+Center-based assignment with 3 clusters. In this example, β_s^2(M^2) > α (b_s^2 = 0) and β_s^3(M^3) > α (b_s^3 = 0).
+Therefore, we assign xs to the first cluster (b_s^1 = 1).
+Figure 3
+Sample-based assignment with 3 clusters. Assume we already know that x1, x2, x3 belong to clusters 1, 2,
+and 3, respectively. xs is the sample to be determined. In this example, ||xs − x1||_2^2 > 4α (b_s^1 = 0) and
+||xs − x2||_2^2 > 4α (b_s^2 = 0). Therefore, xs is assigned to cluster 3 (b_s^3 = 1).
+Figure 4
+Ball-based bounds tightening in two-dimensional space. In this example, suppose it is determined that
+two points xi and xj belong to the kth cluster. We first compute the index set of samples within all
+balls and the original box, S_+^k(M) := {s ∈ S | xs ∈ X ∩ M^k ∩ Bα(xi) ∩ Bα(xj)}. We then generate the
+smallest box containing the samples in S_+^k(M). The red rectangle is the tightened bound we obtain.
+Figure 5
+Box-based bounds tightening in two-dimensional space. In this example, we first generate two boxes,
+Rα(xi) := {x | xi − √α ≤ x ≤ xi + √α} and Rα(xj) := {x | xj − √α ≤ x ≤ xj + √α}. We then create the
+tightened bounds M̂^k = Rα(xi) ∩ Rα(xj) ∩ M^k. The red rectangle is the tightened bound we obtain.
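+To make the constructions in Figures 2, 3, and 5 concrete, the Julia sketch below (hypothetical names
+dist2_to_box, can_belong, tighten_box) combines the two elimination tests with the box-based tightening:
+cluster k is ruled out for a sample xs when the closed-form bound β_s^k(M^k) already exceeds the incumbent
+α, or when xs is more than 4α away, in squared distance, from a sample known to lie in cluster k; the
+samples fixed to cluster k then shrink M^k through intersection with their Rα boxes. This is a sketch
+under these assumptions, not the implementation used in the experiments.
+# Squared distance from x to the box [lo, hi]; the minimizer clamps each coordinate of x
+# into [lo_a, hi_a], i.e., the closed-form solution μ_a* = mid{lo_a, x_a, hi_a}.
+dist2_to_box(x, lo, hi) = sum(abs2, x .- clamp.(x, lo, hi))
+
+# Figures 2 and 3: can sample xs still belong to cluster k with region M^k = [lo, hi],
+# given the incumbent upper bound α and the samples `members` already fixed to cluster k?
+function can_belong(xs, lo, hi, members, α)
+    dist2_to_box(xs, lo, hi) > α && return false      # center-based test (Figure 2)
+    for xj in members                                  # sample-based test (Figure 3)
+        sum(abs2, xs .- xj) > 4α && return false
+    end
+    return true
+end
+
+# Figure 5: intersect M^k with Rα(xi) = [xi − √α, xi + √α] for every fixed member xi.
+# Returns the tightened bounds, or `nothing` if the region becomes empty (node pruned).
+function tighten_box(lo, hi, members, α)
+    r = sqrt(α)
+    new_lo, new_hi = copy(lo), copy(hi)
+    for xi in members
+        new_lo .= max.(new_lo, xi .- r)
+        new_hi .= min.(new_hi, xi .+ r)
+    end
+    any(new_lo .> new_hi) && return nothing
+    return new_lo, new_hi
+end
+If can_belong returns true for exactly one cluster, the assignment b_s^k can be fixed, which is what
+supplies the known members that drive the tightening in Figures 4 and 5.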
+[Figure 6 diagram: the dataset is split into subsets, one per process; each process performs bounds tightening, lower/upper bounding, and sample reduction on its subset in parallel (Map); the tightened space of centers, the lower and upper bounds, and the index sets of redundant samples are gathered from each process, the dataset is updated according to the redundant index set, and the reduced dataset is spread to each process equally (Reduce).]
+Figure 6
+Parallelization of the reduced-space branch and bound scheme
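+As a rough illustration of the bounding step in Figure 6, the Julia sketch below (hypothetical names;
+it assumes MPI.jl's Allreduce(sendbuf, op, comm) interface, that MPI.Init() has already been called,
+and that every rank holds at least one sample) computes a node's lower and upper bounds when the
+samples are distributed across processes. Both bounds are maxima over samples, so the local values can
+be combined with a single MAX reduction; the gathering of redundant-sample index sets and the dataset
+update are omitted. This is a sketch, not the parallel implementation described in the paper.
+using MPI
+
+# Squared distance to a box, as in the previous sketch.
+dist2_to_box(x, lo, hi) = sum(abs2, x .- clamp.(x, lo, hi))
+
+# Bounding for one BB node with the samples distributed across MPI ranks.
+# Xlocal holds this rank's subset (one sample per row), boxes[k] = (lo, hi) bounds M^k,
+# and centers[k] is the candidate center used for the upper bound.
+function parallel_bounds(Xlocal, boxes, centers)
+    comm = MPI.COMM_WORLD
+    S = size(Xlocal, 1)
+    # lower bound: max over local samples of min_k of the closed-form distance to M^k
+    lb_local = maximum(minimum(dist2_to_box(Xlocal[s, :], lo, hi) for (lo, hi) in boxes)
+                       for s in 1:S)
+    # upper bound: max over local samples of min_k of the distance to the candidate centers
+    ub_local = maximum(minimum(sum(abs2, Xlocal[s, :] .- c) for c in centers)
+                       for s in 1:S)
+    # Reduce step: both bounds are maxima over all samples, so a MAX reduction suffices
+    return MPI.Allreduce(lb_local, MPI.MAX, comm), MPI.Allreduce(ub_local, MPI.MAX, comm)
+end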
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' In addition, we also propose several acceleration techniques to narrow down the region of centers, including bounds tightening, sample reduction, and parallelization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Extensive studies on synthetic and real- world datasets have demonstrated that our algorithm can solve the K-center problems to global optimal within 4 hours for ten million samples in the serial mode and one billion samples in the parallel mode.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Moreover, compared with the state-of-the-art heuristic methods, the global optimum obtained by our algorithm can averagely reduce the objective function by 25.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='8% on all the synthetic and real-world datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Key words : global optimization;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' K-center clustering;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' branch and bound;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' two-stage decomposition;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' bounds tightening 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Introduction Cluster analysis is a task to group similar samples into the same cluster while separating less similar samples into different clusters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' It is a fundamental unsupervised machine learning task that explores the character of datasets without the need to annotate cluster classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Clustering plays a vital role in various fields, such as data summarization (Kleindessner et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 2019, Hesabi et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 1 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='00061v1 [math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='OC] 30 Dec 2022 Ren et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' : Global Optimization for K-Center of One Billion Samples 2 2015), customer grouping (Aggarwal et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 2004), facility location determination (Hansen et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 2009), and etc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' There are several typical cluster models, including connectivity-based models, centroid-based models, distribution-based models, density-based models, etc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' This work focuses on one of the fun- damental centroid-based clustering models called the K-center problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' The goal of the K-center problem is to minimize the maximum within-cluster distance (Kaufman and Rousseeuw 2009).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Specifically, given a dataset with S samples and the desired number of clusters K, the K-center problem aims to select K samples from the dataset as centers and to minimize the maximum distance from other samples to its closest center.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' The K-center problem is a combinatorial opti- mization problem that has been widely studied in theoretical computer science (Lim et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 2005).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Moreover, it has been intensively explored as a symmetric and uncapacitated case of the p-center facility location problem in operations research and management science (Garcia-Diaz et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 2019), where the number of facilities corresponds to the variable k in a standard K-center problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Formally, provided a K, the objective function of K-center problem can be formulated as follows: min µ∈X max s∈S min k∈K ||xs − µk||2 2 (1) where X = {x1,.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=',xS} is the dataset with S samples and A attributes, in which xs = [xs,1,.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=',xs,A] ∈ RA is the sth sample and xs,a is the ath attribute of ith sample, s ∈ S := {1,··· ,S} is the index set of samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' As to the variables related to clusters, k ∈ K := {1,··· ,K} is the index set of clusters, µ := {µ1,··· ,µK} represents the center set of clusters, µk = [µk 1,.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=',µk A] ∈ RA is the center of kth cluster.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Here, µ are the variables to be determined in this problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' We use µ ∈ X to denote the “centers on samples” constraint in which each cluster’s center is restricted to the existing samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Literature Review The K-center problem has been shown to be NP-hard (Gonzalez 1985), which means that it is unlikely to find an optimal solution in polynomial time unless P = NP (Garey and Johnson 1979).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' As a remedy, heuristic algorithms, which aim to find a good but not necessarily optimal solution, Ren et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' : Global Optimization for K-Center of One Billion Samples 3 are often used to solve the K-center problem on large-scale datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' The study of exact algorithms, which provide an optimal solution but may hardly be terminated in an acceptable time, is restricted to small-scale datasets due to this poor scalability on larger datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Regarding heuristic algorithms, there are several 2-approximation algorithms that provide a theoretical guarantee of their distance from the optimal solution for the K-center problem, but do not provide a guarantee on their running time (Plesn´ık 1987, Gonzalez 1985, Dyer and Frieze 1985, Hochbaum and Shmoys 1985, Cook et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 1995).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Among these 2-approximation algorithms, Furthest Point First (FPF) algorithm proposed by Gonzalez (1985) is known to be the fastest in practice (Miheliˇc and Robic 2005).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' It works by starting with a randomly selected center and then adding points that are farthest from the existing centers to the center set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Despite their solution quality guarantee, these 2-approximation algorithms may not always provide close-to-optimal solutions in practice (Garcia-Diaz et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Another kind of heuristic methods with a polynomial running time but a weaker solution quality guarantee is also intensively studied in the literature (Miheliˇc and Robic 2005, Garcia-Diaz et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Besides heuristic methods, there are also metaheuristic methods that do not have a polynomial running time or a solution quality guarantee, but have been shown to provide near-optimal solutions in some cases (Mladenovi´c et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 2003, Pullan 2008, Davidovi´c et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 2011).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' In sum, none of these algorithms can deterministically guarantee a global optimal solution for the K-center problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' In contrast to the numerous heuristic algorithms, the study of exact algorithms, which provide the optimal solution but no solution time guarantee, is still struggling with small-scale problems (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=', thousands of samples).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Early exact works are inspired by the relationship between K-center and set-covering problems (Minieka 1970).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Daskin (2000) transferred the K-center problem to a maximal covering problem, in which the number of covered samples by K centers is maximized.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Then, they proposed an iterative binary search scheme to accelerate the solving procedure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Ilhan and Pinar (2001) considered iteratively setting a maximum distance and validating if it can cover all the samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Elloumi et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' (2004) designed a new integer linear programming formulation of Ren et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' : Global Optimization for K-Center of One Billion Samples 4 the K-center problem, then solved this new formulation by leveraging the binary search scheme and linear programming relaxation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' These algorithms have been shown to provide practical results on small-scale datasets with up to 1,817 samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Another research direction models the K-center problem as a Mixed Integer Programming (MIP) formulation, allowing for the use of the branch and bound technique to find an optimal solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' However, the vanilla implementations of the branch and bound technique are confined to small-scale datasets with fewer than 250 samples (Brusco and Stahl 2005).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Hence, constraint programming is introduced to address the larger scale K-center problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Dao et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' (2013) designed two sets of variables describing the cluster centers and sample belongings, then updated the solution through constraint propagation and branching.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' They further reduced the sets of variables and proposed a more general framework in Duong et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' By involving constraint programming, their works can solve the datasets with up to 5,000 samples.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Recently, researchers have explored iterative techniques to solve the K-center problem on large datasets by breaking it down into smaller subproblems, such as iterative sampling (Aloise and Contardo 2018) and row generation (Contardo et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' In Aloise and Contardo (2018), a sampling-based algorithm was proposed that alternates between an exact procedure on a small subset of the data and a heuristic procedure to test the optimality of the current solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' This algorithm is capable to solve a dataset containing 581,012 samples within 4 hours.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' However, a report about the optimality gap is absent, which is an important measure of solution quality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' According to that computing the covering set for a subset of all samples is cheaper than all (Chen and Handler 1987, Chen and Chen 2009), the same research group proposed a row generation algorithm that relies on computing a much smaller sub-matrix (Contardo et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' This approach is able to solve a dataset with 1 million samples to a 6% gap in 9 hours.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' However, neither of these methods provides a finite-step convergence guarantee, which results in that they may not always converge to an arbitrarily small gap within a finite number of steps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Therefore, these methods can lead to a nontrivial optimality gap, especially for large datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Ren et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' : Global Optimization for K-Center of One Billion Samples 5 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Main contributions Recently, Cao and Zavala (2019) proposed a reduced-space spatial branch and bound (BB) scheme for two-stage stochastic nonlinear programs.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Hua et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' (2021) adopted this reduced-space BB scheme and Lagrangian decomposition to solve the K-means clustering problem with a global optimal guarantee.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' They solve the large-scale K-means problems up to 210,000 samples to 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='6% optimality gap within 4 hours.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' However, these works can not be directly applied to the K-center problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' The challenge is that the K-center problem minimizes the maximum within-cluster dis- tance instead of the average within-cluster distance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Therefore, utilizing the Lagrangian decom- position method to compute the lower bound is impossible.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Moreover, because of the “centers on samples” constraint in the K-center problem, the direct application of Hua’s algorithm will lead to infeasible solutions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' To address these challenges, we propose a tailored reduced-space branch and bound algorithm for the K-center problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' We also design several bounds tightening (BT) and sample reduction methods to accelerate the BB procedure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Our algorithm is unique in that it only branches on the region of centers, which allows us to guarantee convergence to the global optimum within a finite number of steps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' In contrast, traditional branch and bound algorithms must branch on all integer variables, which can become computationally infeasible for large-scale problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' By focusing on the limited region of centers, our algorithm is capable to solve even large-scale K-center problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Specifically, the main contributions of this paper are as follows: We propose an exact global optimization algorithm based on a tailored reduced-space branch and bound scheme for the K-center problem.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' To increase efficiency, we develop a two-stage decom- posable lower bounding method with a closed-form solution, eliminating the need for using any MIP solver in the optimization process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Moreover, the convergence of our algorithm to the global optimum is guaranteed by branching only on the region of centers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' We demonstrate that the assignment of clusters can be determined for many samples without knowing the optimal solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Based on this characteristic, we propose several bounds tightening Ren et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' : Global Optimization for K-Center of One Billion Samples 6 and sample reduction techniques to further reduce the search space and accelerate the solving procedure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Moreover, we also implement a sample-level parallelization strategy to fully utilize computational resources.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' An open-source Julia implementation of the algorithm is provided.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Extensive studies on 5 synthetic and 33 real-world datasets have demonstrated that we can obtain the global solution for datasets with up to 1 billion samples and 12 features, a feat that has not been achieved so far.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Especially, compared with the heuristic methods, the global optimum obtained by our algorithm can averagely reduce the objective function by 25.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='8% on all the synthetic and real-world datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' This paper is an expanded version of our proceeding publication (Shi et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 2022) that includes one new acceleration technique called sample reduction and a parallel implementation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' These improve- ments have significantly increased the scale of the optimally solvable K-center problem from 14 million samples to 1 billion.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' In this version, we provide more detailed proof of the global opti- mum convergence of our algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' In addition, we have designed more comprehensive numerical experiments on a broader range of datasets and parameters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Outline This paper is organized as follows: Section 2 introduces a two-stage formulation and a Mixed Integer Nonlinear Programming (MINLP) formulation for the K-center problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Section 3 presents the details of the reduced-space branch and bound algorithm, including the lower bound, upper bound methods, and convergence analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Section 4 discusses the accelerating techniques for our BB algorithm, including bounds tightening, sample reduction, and parallel implementation techniques.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Section 5 presents the detailed proof of convergence to the global optimum in the finite steps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Section 6 gives extensive numerical results compared with other algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Finally, Section 7 concludes the paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Ren et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' : Global Optimization for K-Center of One Billion Samples 7 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' K-center Formulation 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Two-stage Formulation To introduce the lower bounding method in the branch and bound scheme, we first propose a two-stage optimization form of the K-center Problem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' The first-stage problem is as follows: z = min µ∈X∩M0 max s∈S Qs(µ).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' (2) where the center set µ is the so-called first-stage variable, Qs(µ) is the optimal value of the second- stage optimization problem: Qs(µ) = min k∈K ||xs − µk||2 2 (3) We denote a closed set M0 := {µ | µ ≤ µ ≤ ¯µ} as the region of centers, where µ is the lower bound of centers and ¯µ is the upper bound, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=', µk a = min s∈S Xs,a, ¯µk a = max s∈S Xs,a, ∀k ∈ K, a ∈ {1,··· ,A}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Here, the constraint µ ∈ M0 is introduced to simplify the discussion of the BB scheme.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Since M0 can be inferred directly from data, it will not affect the optimal solution of Problem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Constraint µ ∈ X ∩ M0 means the center of each cluster is selected from the samples belonging to the intersection set of the corresponding region M0 and the dataset X 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' MINLP Formulation To introduce the bounds tightening and sample reduction methods, we propose a MINLP formu- lation of the K-center Problem 1: min µ,d,b,λ d∗ (4a) s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' dk s ≥ ||xs − µk||2 2 (4b) − N1(1 − bk s) ≤ d∗ s − dk s ≤ 0 (4c) d∗ ≥ d∗ s (4d) � k∈K bk s = 1 (4e) bk s ∈ {0,1} (4f) Ren et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' : Global Optimization for K-Center of One Billion Samples 8 − N2(1 − λk s) ≤ xs − µk ≤ N2(1 − λk s) (4g) � s∈S λk s = 1 (4h) λk s ∈ {0,1} (4i) bk s ≥ λk s (4j) s ∈ S,k ∈ K (4k) where dk s represents the distance between sample xs and center µk, d∗ s denotes the distance between xs and the center of its cluster, N1 and N2 are both arbitrary large values.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' bk s and λk s are two binary variables.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' bk s is equal to 1 if sample xs belongs to the Kth cluster, and 0 otherwise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' λk s is equal to 1 if xs is the center of the Kth cluster µk, and 0 otherwise.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Constraint 4c is a big M formulation and ensures that d∗ s = dk s if bk s = 1 and d∗ s ≤ dk s otherwise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Constraint 4e guarantees that sample xs belongs to one cluster.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' We also adopt Constraint 4g, 4h and 4j to represent the “centers on samples” constraints, µ ∈ X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Specifically, Constraint 4g uses a big M formula to make sure that µk = xs if λk s = 1 and Constraint 4h confirms that each center can only be selected on one sample.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Constraint 4j ensures that if xs is the center of the Kth cluster, then it is assigned to the Kth cluster.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' It should be noted that the global optimizer CPLEX also relies on this formulation to solve the K-center problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Tailored Reduced-space Branch and Bound Scheme This section introduces a tailored reduced-space branch and bound algorithm for the K-center problem with lower and upper bounding methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Lower Bounds In this section, we adopt the two-stage formulation and derive a closed-form solution to obtain the lower bound of the K-center Problem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' At each node in the BB procedure, we deal with a subset of M0, which is denoted as M, and solve the following problem concerning M: z(M) = min µ∈X∩M max s∈S Qs(µ) (5) Ren et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' : Global Optimization for K-Center of One Billion Samples 9 This problem can be equivalently reformulated as the following problem by duplicating µ across samples and enforcing them to be equal: min µs∈X∩M max s∈S Qs(µs) (6a) s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' µs = µs+1,s ∈ {1,··· ,S − 1} (6b) We call constraints 6b the non-anticipativity constraints.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' By removing the “centers on samples” constraint µ ∈ X and the non-anticipativity constraints 6b, we attain a lower bound formulation as follow: β(M) := min µs∈M max s∈S Qs(µs).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' (7) With constraints relaxed, the feasible region of Problem 7 is a superset of Problem 6’s feasible region.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Therefore, it is obvious that β(M) ≤ z(M).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' In Problem 7, since µ of each sample is independent, it is obvious that: β(M) = max s∈S min µs∈M Qs(µs).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' (8) Clearly, problem 8 can be decomposed into S subproblems with β(M) = max s∈S βs(M): βs(M) = min µ∈M Qs(µ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' (9) Denote the region of kth cluster’s center as M k := {µk : µk ≤ µk ≤ ¯µk} where µk and ¯µk are the lower and upper bound of µk respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Since Qs(µ) = min k∈K ||xs − µk||2 2, we have βs(M) = min k∈K min µk∈Mk ||xs − µk||2 2, (10) which can be further decomposed into K subsubproblems with βs(M)=min k∈K βk s (M k): βk s (M k) = min µk∈Mk ||xs − µk||2 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' (11) The analytical solution to Problem 11 is: µk a ∗ = mid{µk a, xs,a, ¯µk a},∀a ∈ {1,··· ,A}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Consequently, the closed-form solution to Problem 7 can be easily computed by the max-min operation on all the samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Ren et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' : Global Optimization for K-Center of One Billion Samples 10 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Upper Bounds At each node in the BB procedure, the upper bounds of Problem 5 can be obtained by fixing the centers at a candidate feasible solution ˆµ ∈ X ∩ M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' In this way, we can compute the upper bound base on the following equation: α(M) = max s∈S min k∈K ||xs − ˆµk||2 2 (12) Since ˆµ is a feasible solution, we have z(M) ≤ α(M), ∀ˆµ ∈ X ∩ M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' In our implementation, we use two methods to obtain the candidate feasible solutions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' At the root node, we use a heuristic method called Farthest First Traversal (Gonzalez 1985) to obtain a candidate solution ˆµ ∈ X ∩M0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Using this method, we randomly pick an initial point and select each following point as far as possible from the previously selected points.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Algorithm 2 describes the details of the farthest first traversal, where d(xs,T) represents the minimum distance from sample xs to any sample in set T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' We use FFT(M0) to denote the upper bound obtained using this approach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' At a child node with center region M, for each cluster, we select the data sample closest to the middle point of M k as ˆµk, and obtain the corresponding upper bound α(M).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Branching Our algorithm only needs to branch on the region of centers, M := {µ : µ ≤ µ ≤ ¯µ}, to guarantee convergence, which would be theoretically discussed in Section 5, o.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Since the desired number of clusters is K and the number of attributes is A, the number of possible branching variables is K ×A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' The selection of branching variables and values will dramatically influence the BB procedure’s efficiency.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' In our implementation, we select the max-range variable at each node as the branching variable and the midpoint of this variable as the branching value.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Branch and Bound Scheme The detailed reduced-space branch and bound algorithm for the K-center Problem 1 are given in the Algorithm 1.' 
3.4. Branch and Bound Scheme

The detailed reduced-space branch and bound algorithm for the K-center Problem 1 is given in Algorithm 1. In the algorithm, we use relint(·) to denote the relative interior of a set. We can also establish the convergence of the branch-and-bound scheme in Algorithm 1. The BB procedure generates a monotonically non-ascending sequence {α_i} and a monotonically non-descending sequence {β_i}, and we can show that both converge to z in a finite number of steps.

Theorem 1. Algorithm 1 converges to the global optimal solution after a finite number of steps L, with β_L = z = α_L, by only branching on the region of centers.

Since the acceleration techniques introduced in Section 4 also influence the global convergence, we present the detailed proof of Theorem 1 in Section 5, after introducing these techniques.

Algorithm 1 Branch and Bound Scheme
  Initialization: Initialize the iteration index i ← 0; set M ← {M_0} and tolerance ϵ > 0; compute the initial lower and upper bounds β_i = β(M_0), α_i = FFT(M_0) // Alg. 2; select K farthest initial seeds // Sec. 4.1.1.
  while M ≠ ∅ do
    Node Selection: Select a set M satisfying β(M) = β_i from M and delete it from M; update i ← i + 1.
    Bounds Tightening: Cluster Assignment // Alg. 3; Bounds Tightening // Alg. 4; obtain the tightened node M̂; if i mod i_sr = 0, Sample Reduction // Alg. 5.
    if ∃ |X ∩ M^k| > 1, k ∈ K then
      Branching: Find two subsets M_1 and M_2 s.t. relint(M_1) ∩ relint(M_2) = ∅ and M_1 ∪ M_2 = M; update M ← M ∪ {M_i} if X ∩ M^k_i ≠ ∅, ∀k ∈ K, i ∈ {1, 2}.
    end if
    Bounding: Compute the upper and lower bounds α(M_1), β(M_1), α(M_2), β(M_2); let β_i ← min{β(M′) | M′ ∈ M}; let α_i ← min{α_{i−1}, α(M_1), α(M_2)}; remove all M′ from M if β(M′) ≥ α_i; if α_i − β_i ≤ ϵ, STOP.
  end while

Algorithm 2 Farthest First Traversal
  Initialization: Randomly pick s ∈ S; denote T as the set of K points selected by farthest first traversal; set T ← {x_s}.
  while |T| < K do
    Compute x_s ∈ arg max_{x_s∈X} d(x_s, T), i.e., the sample farthest away from set T;
    T ← T ∪ {x_s}.
  end while

Algorithm 3 Cluster Assignment
  Center-Based Assignment:
  for each sample x_s ∈ X do
    if b^k_s = 0, ∀k ∈ K then
      if β^k_s(M^k) > α, ∀k ∈ K \ {k′} then
        assign x_s to cluster k′ with b^{k′}_s = 1;
      end if
    end if
  end for
  Sample-Based Assignment:
  if all clusters have at least one sample assigned then
    for each sample x_s ∈ X do
      if ∀k ∈ K \ {k′}, ∃ x_j assigned to the kth cluster with ||x_s − x_j||_2^2 > 4α then
        assign x_s to cluster k′ with b^{k′}_s = 1;
      end if
    end for
  end if

Algorithm 4 Bounds Tightening
  Given the current center region M and upper bound α:
  for each cluster k ∈ K do
    Obtain the assigned sample set J^k using Alg. 3;
    Compute the ball-based or box-based region of each assigned sample, B_α(x_j) or R_α(x_j);
    Tighten the center region by M^k ∩ B_α(x_j) or M^k ∩ R_α(x_j), ∀j ∈ J^k;
    Further tighten according to the "centers on samples" constraint.
  end for

Algorithm 5 Sample Reduction
  Initialize the index set of redundant samples as R ← S.
  for all BB nodes do
    Obtain the index set of redundant samples for lower bounding, R_LB, according to the criterion in Sec. 4.2.1;
    Obtain the index set of redundant samples for upper bounding, R_UB, according to the criterion in Sec. 4.2.2;
    Update the redundant index set, R ← R ∩ R_LB ∩ R_UB.
  end for
  Delete the samples in the redundant set R from the current dataset.

4. Acceleration Techniques

Although the lower bound introduced in Section 3.1 is sufficient to guarantee convergence, it might not be very tight, leading to a tremendous number of iterations.
Therefore, we propose several acceleration techniques to reduce the search space and speed up the BB procedure. Since Algorithm 1 only branches on the region of centers M := {µ : µ̲ ≤ µ ≤ µ̄}, we focus on reducing the region of centers to accelerate the solution process while not excluding the optimal solution of the original K-center problem.

4.1. Bounds Tightening Techniques

In each node, the assignment of many samples (i.e., which cluster a sample is assigned to) can be pre-determined from the geometrical relationship between the samples and the regions of centers. This information can be further used to reduce the region of µ.

4.1.1. Cluster Assignment. The task of cluster assignment is to pre-determine some values of b^k_s in the MINLP Formulation 4 at each BB node before finding the global optimal solution. We first demonstrate the relations between samples and centers. Denote α as the upper bound obtained using the methods described in Section 3.2. Then, based on Objective 4a and Constraint 4d, we have d*_s ≤ d* ≤ α. From Constraints 4b and 4c, we can conclude that if b^k_s = 1, then ||x_s − µ^k||_2^2 ≤ d*_s ≤ α.
Therefore, we can derive Lemma 1:

Lemma 1. If sample x_s is in the kth cluster, then ||x_s − µ^k||_2^2 ≤ α, where α is an upper bound of the K-center problem.

Besides the relation between samples and centers, cluster assignments may also be determined from the distance between two samples. Suppose samples x_i and x_j belong to the kth cluster; then from Lemma 1 we have ||x_i − µ^k||_2^2 ≤ α and ||x_j − µ^k||_2^2 ≤ α. Thus ||x_i − x_j||_2^2 = ||x_i − µ^k + µ^k − x_j||_2^2 ≤ (||x_i − µ^k||_2 + ||µ^k − x_j||_2)^2 ≤ 4α. Therefore, we have Lemma 2:

Lemma 2. If two samples x_i and x_j are in the same cluster, then ||x_i − x_j||_2^2 ≤ 4α, where α is an upper bound of the K-center problem.

We propose three methods for pre-assigning samples based on these two lemmas.

K Farthest Initial Seeds: From Lemma 2, if ||x_i − x_j||_2^2 > 4α, then x_i and x_j are not in the same cluster. At the root node, if we can find K samples such that the distance between any two of them satisfies ||x_i − x_j||_2^2 > 4α, then we can conclude that these K samples must belong to K distinct clusters. Figure 1 shows an example of this property, in which three samples are pre-assigned to 3 distinct clusters. We call these K points initial seeds. To find the initial seeds, every two samples must be as far apart as possible. Therefore, in our implementation, we use the heuristic Farthest First Traversal (FFT) (Algorithm 2) to obtain K farthest points.
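A compact Julia sketch of this traversal (assuming the samples are stored as an S×A matrix X; illustrative only, not the authors' code) is given below.

# Farthest first traversal (Algorithm 2): greedily pick K samples that are far apart.
function farthest_first_traversal(X, K)
    S = size(X, 1)
    seeds = [rand(1:S)]                                           # random initial point
    dist = [sum(abs2, X[s, :] .- X[seeds[1], :]) for s in 1:S]    # squared d(x_s, T)
    while length(seeds) < K
        s_far = argmax(dist)                      # sample farthest from the current set T
        push!(seeds, s_far)
        for s in 1:S                              # update d(x_s, T) after adding the new seed
            dist[s] = min(dist[s], sum(abs2, X[s, :] .- X[s_far, :]))
        end
    end
    return seeds                                  # indices of the K selected samples
end

The K points returned by the traversal qualify as initial seeds only if every pairwise squared distance among them exceeds 4α, as required by Lemma 2.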
For about half of the case studies shown in Section 6, we can obtain the initial seeds using FFT. However, for the other cases, the initial seeds cannot be obtained using FFT, or the initial seeds may not even exist.

Center-Based Assignment: From Lemma 1, if ||x_s − µ^k||_2^2 > α, then x_s does not belong to the kth cluster, i.e., b^k_s = 0. Consequently, if we can determine that b^k_s = 0, ∀k ∈ K \ {k′}, then b^{k′}_s = 1. However, the value of µ is unknown before the optimal solution is obtained. One observation is that if the BB node with region M contains the optimal solution, then β^k_s(M^k) = min_{µ^k∈M^k} ||x_s − µ^k||_2^2 ≤ ||x_s − µ^k||_2^2. Therefore, if β^k_s(M^k) > α, sample x_s is not in the kth cluster and b^k_s = 0. In summary, for sample x_s, if β^k_s(M^k) > α, ∀k ∈ K \ {k′}, then x_s is assigned to cluster k′ with b^{k′}_s = 1. Figure 2 illustrates an example in two-dimensional space with three clusters. This center-based method can be adopted at every node of the BB scheme. Since β^k_s(M^k) is already available from the lower bounding in Section 3.1, there is no additional computational cost. Nevertheless, we do not need to apply this method at the root node, since M^1_0 = ··· = M^K_0. As the BB scheme continues branching on the regions of centers, M^k becomes more and more different from the other regions, and more samples can be pre-assigned using this center-based method.
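The test itself is a simple scan over the β^k_s(M^k) values computed during lower bounding; the following Julia sketch (with an assumed S×K matrix beta_sk of these values and an assignment vector assign in which 0 means "not yet assigned") illustrates it and is not the authors' implementation.

# Center-based assignment: a sample is fixed to cluster k' if every other cluster is excluded.
function center_based_assignment!(assign, beta_sk, alpha)
    S, K = size(beta_sk)
    for s in 1:S
        assign[s] != 0 && continue                        # already assigned
        feasible = findall(k -> beta_sk[s, k] <= alpha, 1:K)
        if length(feasible) == 1                          # all clusters but one are excluded
            assign[s] = feasible[1]
        end
    end
    return assign
end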
Sample-Based Assignment: Besides utilizing centers to pre-assign samples, assigned samples can also help pre-assign other samples. From Lemma 2, if ||x_i − x_j||_2^2 > 4α, then x_i and x_j are not in the same cluster. If x_j belongs to the kth cluster, then obviously x_i cannot be assigned to the kth cluster and b^k_i = 0. With this relationship, if all the other K − 1 clusters are excluded, x_i is assigned to the remaining cluster. Figure 3 shows an example of the sample-based assignment. There is a prerequisite for using this sample-based method: for each cluster, there must be at least one sample already assigned to it. Based on this prerequisite, the sample-based assignment is utilized only after at least one sample has been pre-assigned for each cluster.

4.1.2. Bounds Tightening. In this subsection, we adopt the Bounds Tightening (BT) technique and the cluster assignment information to reduce the region of µ.

Ball-Based Bounds Tightening: For a sample j, B_α(x_j) = {x : ||x − x_j||_2^2 ≤ α} represents the ball with center x_j and radius √α. Using the cluster assignment methods in Section 4.1.1, if it is already known that sample j belongs to the kth cluster, then by Lemma 1, µ^k ∈ B_α(x_j) holds.
We use J^k to denote the index set of all samples assigned to the kth cluster, i.e., J^k = {j ∈ S | b^k_j = 1}; then µ^k ∈ B_α(x_j), ∀j ∈ J^k. Besides this, we also know that µ^k ∈ X ∩ M^k. Denote S^k_+ as the index set of samples satisfying all these constraints, S^k_+(M) := {s ∈ S | x_s ∈ X ∩ M^k, x_s ∈ B_α(x_j), ∀j ∈ J^k}. In this way, we can obtain a tightened box M̂^k containing all feasible solutions of the kth center, with the bounds of the ath attribute of the kth center tightened to min_{s∈S^k_+(M)} x_{s,a} and max_{s∈S^k_+(M)} x_{s,a}, respectively. Figure 4 gives an example of bounds tightening using this method. One challenge of this ball-based bounds tightening method is that it needs to compute the distance between x_s and x_j for all s ∈ S and j ∈ J^k. If the assignments of the majority of the samples are known, this requires up to S^2 distance calculations, whereas only S × K distance calculations are needed to compute a lower bound. To reduce the computational time, we set a threshold on the maximum number of balls (default: 50) utilized to tighten the bounds in our implementation.

Box-Based Bounds Tightening: Another strategy to reduce the computational burden is based on a relaxation of B_α(x_j). For any ball B_α(x_j), the closed set R_α(x_j) = {x : x_j − √α ≤ x ≤ x_j + √α} is the smallest box containing B_α(x_j). Then we have µ^k ∈ R_α(x_j), ∀j ∈ J^k. Since R_α(x_j) and M^k are all boxes, we can easily compute the tightened bounds M̂^k = ∩_{j∈J^k} R_α(x_j) ∩ M^k.
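The box-based intersection can be carried out attribute-wise at almost no cost; the Julia sketch below (assumed inputs: the current bounds lo_k and hi_k of M^k as length-A vectors, the data matrix X, the assigned index set Jk, and the upper bound alpha) is an illustration rather than the authors' code.

# Box-based bounds tightening for one cluster: intersect M^k with R_alpha(x_j) for all assigned j.
function box_tighten(lo_k, hi_k, X, Jk, alpha)
    r = sqrt(alpha)
    lo_new, hi_new = copy(lo_k), copy(hi_k)
    for j in Jk
        lo_new .= max.(lo_new, X[j, :] .- r)
        hi_new .= min.(hi_new, X[j, :] .+ r)
    end
    return lo_new, hi_new      # the node can be pruned if lo_new[a] > hi_new[a] for some a
end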
Figure 5 gives an example of box-based bounds tightening using this method. Obviously, the bounds generated in Figure 4 are much tighter, while the method in Figure 5 is much faster. Consequently, if |J^k| is small for all clusters, the ball-based bounds tightening method gives more satisfactory results, while if |J^k| is large for any k, box-based bounds tightening provides a cheaper alternative.

4.1.3. Symmetry Breaking. Another way to get tighter bounds is based on symmetry-breaking constraints. We add the constraints µ^1_1 ≤ µ^2_1 ≤ ··· ≤ µ^K_1 to the BB Algorithm 1, where µ^k_a denotes the ath attribute of the kth center. Note that the symmetry-breaking constraints and the FFT-based initial seeds in Section 4.1.1 both break symmetry by imposing a certain order on the clusters, so they cannot be combined. Our implementation uses symmetry breaking only when initial seeds are not found by FFT at the root node. It should be noted that we also add these symmetry-breaking constraints when using CPLEX to solve the MINLP formulation 4 of the K-center problem.
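In a box-based BB, one simple way to exploit such ordering constraints is to propagate them into the first-attribute bounds of each node; the Julia sketch below is our own illustration of such a propagation step (assumed K×A bound matrices lo and hi), not necessarily how the constraints are enforced in the authors' implementation or inside CPLEX.

# Propagate mu^1_1 <= mu^2_1 <= ... <= mu^K_1 into the center boxes of the current node.
function propagate_symmetry!(lo, hi)
    K = size(lo, 1)
    for k in 2:K                 # a center's first attribute cannot lie below its predecessor's lower bound
        lo[k, 1] = max(lo[k, 1], lo[k-1, 1])
    end
    for k in (K-1):-1:1          # nor above its successor's upper bound
        hi[k, 1] = min(hi[k, 1], hi[k+1, 1])
    end
    return lo, hi
end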
4.2. Sample Reduction

Some samples may become redundant during the lower and upper bounding procedures without contributing to the bound improvements. If these samples are proven to be redundant in all the current and future branch nodes, we can conclude that they will not influence the bounding results anymore, which results in sample reduction.

4.2.1. Redundant Samples in Lower Bounding. Denote β as the current best lower bound obtained using the methods described in Section 3.1. According to Equation 8, the lower bound β(M) is the maximum of the samples' optimal values β_s(M). Based on this observation, we further define the best maximum distance of sample s to the center region of µ as

α_s(M) = min_{k∈K} max_{µ^k∈M^k} ||x_s − µ^k||_2^2.    (13)

It is obvious that β_s(M) ≤ α_s(M). If α_s(M) < β, we have β_s(M) < β, which means sample s is not the sample corresponding to the maximum within-cluster distance. Hence, we can conclude that sample s is a redundant sample in lower bounding for this BB node. Moreover, ∀M′ ⊂ M, we have β_s(M′) ≤ α_s(M′) ≤ α_s(M). Due to the shrinking nature of the center region M and the non-descending nature of the lower bound β, if α_s(M) < β holds in a BB node, sample s will remain redundant in all the child nodes of this branch node. It should be noted that α_s(M) can be calculated with an analytical solution similar to that of β_s(M): the inner maximum is attained at µ^k_a = µ̲^k_a if |µ̲^k_a − x_{s,a}| > |µ̄^k_a − x_{s,a}|, and at µ^k_a = µ̄^k_a otherwise.
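The redundancy test therefore reduces to another closed-form evaluation per sample; a Julia sketch (x is one sample as a length-A vector, lo and hi are the K×A bounds of M, and beta is the current best lower bound; illustrative only, not the authors' code) is:

# alpha_s(M) of Eq. 13 via the farthest corner of each box M^k; redundant if alpha_s(M) < beta.
function redundant_for_lb(x, lo, hi, beta)
    K, A = size(lo)
    alpha_s = Inf
    for k in 1:K
        d = 0.0
        for a in 1:A
            far = abs(lo[k, a] - x[a]) > abs(hi[k, a] - x[a]) ? lo[k, a] : hi[k, a]
            d += (x[a] - far)^2
        end
        alpha_s = min(alpha_s, d)     # min over clusters of the worst-case distance
    end
    return alpha_s < beta
end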
4.2.2. Redundant Samples in Upper Bounding. Obviously, a sample x_j cannot be the center of the kth cluster if it does not belong to M^k. Moreover, according to Lemma 1, if a sample x_j is the center of the kth cluster, then ||x_i − x_j||_2^2 ≤ α must hold for all the samples x_i assigned to this cluster. Hence, a sample x_j also cannot be the center of the kth cluster if there exists a sample x_i assigned to the kth cluster satisfying ||x_i − x_j||_2^2 > α. If sample x_j cannot be the center of any cluster, we call x_j a redundant sample for upper bounding. Due to the non-ascending nature of the upper bound α, if sample s is redundant for upper bounding in a branch node, it will remain redundant in all the child nodes of this branch node. It should be noted that the calculations in this method are identical to those of the Sample-Based Assignment in Section 4.1.1, so no extra calculations are introduced.

4.2.3. Sample Reduction. If a sample s is redundant in lower bounding, it implies that sample s is not the "worst-case sample" corresponding to the maximum within-cluster distance. If a sample s is redundant in upper bounding, it means that sample s cannot be the center of any cluster.
If sample s is redundant in both lower bounding and upper bounding, then removing this sample will not affect the solution of this BB node and all of its child BB nodes. Algorithm 5 describes the procedure of sample reduction: first, obtain the redundant samples for lower and upper bounding in each branch node; then, delete the samples that are redundant for both lower and upper bounding in all the branch nodes. In our implementation, this sample reduction method is executed every i_sr iterations.

4.2.4. Effects on Computation. Sample reduction reduces the number of samples that need to be explored by deleting redundant samples every i_sr iterations, as described in Algorithm 5. It can also accelerate the calculation of lower bounds and bounds tightening at each iteration. For the lower bounding method in Section 3.1, we only need to solve the second-stage problems for the non-redundant samples that have been validated by the lower-bounding criterion in Section 4.2.1. Additionally, once a sample is deemed redundant for lower bounding in a particular node, it remains redundant in all child nodes of that node. This means that we do not need to solve the second-stage problem for this sample in the current node or any of its child nodes.
For the bounds tightening methods in Section 4.1.2, we only need to calculate the bounds based on the non-redundant samples that have been validated by the upper-bounding criterion in Section 4.2.2. Similarly, if a sample is redundant for upper bounding in a node, it remains redundant in all child nodes of that node and can be eliminated from the bounds tightening calculations in the current node and its child nodes. In this way, sample reduction not only deletes redundant samples every i_sr iterations, but also eliminates redundant information in the current node and its child nodes, thereby accelerating the overall calculation.

4.3. Parallelization

We also provide a parallel implementation of the whole algorithm to accelerate the solution process. Since our algorithm primarily operates at the sample level, e.g., computing β_s(M) in the lower bounding, we can parallelize it by distributing the dataset equally across the processes, performing the computations on each process with its local dataset, and communicating the results as needed. The detailed parallelization framework is shown in Figure 6, where the green modules represent the parallel operations on each process, and the blue modules represent serial reduction operations. This parallelization framework is realized using the Message Passing Interface (MPI) through MPI.jl (Byrne et al. 2021).
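A minimal sketch of this pattern with MPI.jl is shown below; the variable names (X_local, lo, hi) and the reuse of the lower_bound sketch given earlier are our assumptions for illustration, not the authors' code.

using MPI

MPI.Init()
comm = MPI.COMM_WORLD

# Every rank holds its own slice X_local of the samples and the same center boxes lo, hi.
local_beta = lower_bound(X_local, lo, hi)            # max over the local samples only
global_beta = MPI.Allreduce(local_beta, max, comm)   # beta(M) = max over all samples on all ranks

MPI.Finalize()

Upper bounds and the redundancy information can be combined across ranks with the same kind of reduction.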
5. Convergence Analysis

As stated in Theorem 1, the branch-and-bound scheme for the K-center problem in Algorithm 1 converges to the global optimal solution after a finite number of steps. In this section, we present the proof of this theorem. Specifically, the branch-and-bound scheme in Algorithm 1 branches on the region of centers, µ, and generates a rooted tree with the search space M_0 at the root node. For the child node at the qth level and l_qth iteration, we denote the search space as M_{l_q}. The search space of its child node is denoted as M_{l_{q+1}}, satisfying M_{l_{q+1}} ⊂ M_{l_q}. We denote the decreasing sequence from the root node with M_0 to the child node with M_{l_q} as {M_{l_q}}. The search space of the kth cluster center at M_{l_q} is denoted as M^k_{l_q}. Along the branch-and-bound process, we can obtain a monotonically non-ascending upper bound sequence {α_i} and a monotonically non-descending lower bound sequence {β_i}. In the following convergence analysis, we adapt the fundamental conclusions from Horst and Tuy (2013) to our algorithm. It should be noted that the convergence of the K-center problem here is stronger than the convergence analysis in Cao and Zavala (2019) for two-stage nonlinear optimization problems or the convergence proof in Hua et al. (2021) for the K-means clustering problem. Both Cao and Zavala (2019) and Hua et al. (2021) guarantee convergence in the sense of lim_{i→∞} α_i = lim_{i→∞} β_i = z.
That is, they can only produce a global ϵ-optimal solution in a finite number of steps, whereas for the K-center problem our algorithm can obtain an exact optimal solution (i.e., ϵ = 0) in a finite number of steps.

Definition 1 (Definition IV.3, Horst and Tuy 2013). A bounding operation is called finitely consistent if, at every step, any unfathomed partition element can be further refined and if any decreasing sequence {M_{l_q}} of successively refined partition elements is finite.

Lemma 3. The bounding operation in Algorithm 1 is finitely consistent.

Proof. First, we prove that any unfathomed partition element M_{l_q} can be further refined. Any unfathomed M_{l_q} satisfies two conditions: (1) ∃ |X ∩ M^k_{l_q}| > 1, k ∈ K, and (2) α_l − β(M_{l_q}) > ϵ, ϵ > 0. Obviously, there exists at least one partition that can be further refined. We then prove that any decreasing sequence {M_{l_q}} of successively refined partition elements is finite. Assume, by contradiction, that some sequence {M_{l_q}} is infinite. In our algorithm, since we branch on the first-stage variable µ corresponding to the diameter of M, this subdivision is exhaustive. Therefore, we have lim_{q→∞} δ(M_{l_q}) = 0, and {M_{l_q}} converges to one point µ̄ for each cluster, where δ(M_{l_q}) is the diameter of the set M_{l_q}.
If this point µ̄ ∈ X, there exists a ball around µ̄, denoted as B_r(µ̄) = {µ : ||µ − µ̄|| ≤ r}, fulfilling |X ∩ B_r(µ̄)| = 1. There then exists a level q_0 such that M_{l_q} ⊂ B_r(µ̄), ∀q ≥ q_0. At the l_{q_0}th iteration, according to the termination condition |X ∩ M^k_{l_q}| = 1, ∀k ∈ K, the partition element M_{l_{q_0}} will not be branched anymore. Because the dataset X is finite, the sequence {M_{l_q}} is finite in this case. If µ̄ ∉ X, there is a ball around µ̄, denoted as B_r(µ̄) = {µ : ||µ − µ̄|| ≤ r}, satisfying |X ∩ B_r(µ̄)| = 0. There exists a level q_0 such that M_{l_q} ⊂ B_r(µ̄), ∀q ≥ q_0. At the l_{q_0}th iteration, M_{l_{q_0}} will be deleted according to the termination conditions. Consequently, the sequence {M_{l_q}} is also finite in this case. In conclusion, no infinite sequence {M_{l_q}} can exist.

Theorem 2 (Theorem IV.1, Horst and Tuy 2013). In a BB procedure, suppose that the bounding operation is finitely consistent. Then the procedure terminates after finitely many steps.

Lemma 4. Algorithm 1 terminates after finitely many steps.
Proof. From Lemma 3, the bounding operation in Algorithm 1 is finitely consistent. According to Theorem 2, Algorithm 1 terminates after finitely many steps.

Finally, we prove that the BB scheme for the K-center problem is convergent:

Theorem 1. Algorithm 1 converges to the global optimal solution after a finite number of steps L, with β_L = z = α_L, by only branching on the space of µ.

Proof. From Lemma 4, Algorithm 1 terminates after finitely many steps. The algorithm terminates in one of two situations. The first situation is |β_l − α_l| ≤ ϵ, ϵ ≥ 0; when ϵ is set to 0, we have β_l = z = α_l. The second situation is that the branch node set M = ∅. A branch node with region M is deleted from M and not further partitioned if it satisfies β(M) > α_l or |X ∩ M^k| = 1, ∀k ∈ K. In the first case, it is obvious that the branch node does not contain the global optimal solution µ*. Therefore, the branch node with region M′ containing the optimal solution µ* is not further partitioned because of the second case, |X ∩ M′^k| = 1, ∀k ∈ K. After bounds tightening according to the "centers on samples" constraint, the tightened node is M′ = {µ*}. Obviously, for this tightened node, we have β_l = β(M′) = z = α(M′) = α_l. In this way, we have proved Theorem 1.
6. Numerical Results

In this section, we report the detailed implementation of our algorithm and the numerical results on synthetic and real-world datasets.

6.1. Implementation Details

We denote our tailored reduced-space branch-and-bound Algorithm 1 with and without the acceleration techniques as BB+CF+BT and BB+CF, respectively. All our algorithms are implemented in Julia, and the parallel version is realized using the Message Passing Interface through the MPI.jl module. We compare the performance of our algorithm with the state-of-the-art global optimizer CPLEX 20.1.0 (Cplex 2020) and the heuristic algorithm Farthest First Traversal (FFT), as shown in Algorithm 2. The initial points severely influence the results of FFT; therefore, we execute FFT for 100 trials with randomly selected initial points and report the best results. As for CPLEX, we use the MINLP formulation (Problem 4) with the symmetry-breaking constraints to solve the K-center problem. We executed all experiments on the high-performance computing cluster Niagara of the Digital Research Alliance of Canada. Each computing node of the Niagara cluster has 40 Intel "Skylake" cores and 188 GiB of RAM. For the global optimizer CPLEX and our algorithms, a time limit of 4 hours is set to compare the performance fairly and to avoid unacceptable computational costs.
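For reference, the following minimal Julia sketch illustrates the kind of farthest-first traversal baseline described above, run with random restarts; it is an illustrative reimplementation rather than a verbatim copy of Algorithm 2, and the function names are our own.

using Random

# Illustrative farthest-first traversal (FFT) sketch using squared Euclidean distances,
# matching the objective of Problem (1); not a verbatim copy of Algorithm 2.
function farthest_first(X::AbstractMatrix, K::Int; rng=Random.default_rng())
    S = size(X, 2)                                   # samples are the columns of X
    centers = [rand(rng, 1:S)]                       # random initial center
    d = [sum(abs2, X[:, s] .- X[:, centers[1]]) for s in 1:S]
    while length(centers) < K
        c = argmax(d)                                # sample farthest from the current centers
        push!(centers, c)
        for s in 1:S                                 # update distance to the closest center
            d[s] = min(d[s], sum(abs2, X[:, s] .- X[:, c]))
        end
    end
    return centers, maximum(d)                       # selected centers and K-center objective
end

# As in our experiments, run several random restarts and keep the best objective.
best_fft(X, K; trials=100) = minimum(last(farthest_first(X, K)) for _ in 1:trials)

Each restart of this sketch costs on the order of K·S·A distance evaluations, which is why running 100 random trials remains cheap even for large S.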
For our algorithms, there is also an optimality gap limit of 0.1%. The source code is available at https://github.com/YankaiGroup/global_kcenter_extended.

In order to evaluate the performance extensively, we execute all the algorithms on both synthetic and real-world datasets. The synthetic datasets are generated using the Distributions.jl and Random.jl modules in Julia, with 3 Gaussian clusters, 2 attributes, and varying numbers of samples. As for the real-world datasets, we use 30 datasets from the UCI Machine Learning Repository (Dua and Graff 2017), the dataset Pr2392 from Padberg and Rinaldi (1991), Hemi from Wang et al. (2022), and Taxi from Schneider (2015). The number of samples ranges from 150 to 1,120,841,769, and the number of attributes ranges from 2 to 68. The detailed characteristics of the datasets can be found in the following result tables.
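As an illustration of this setup, the sketch below generates a comparable synthetic dataset with Distributions.jl; the cluster means, covariances, and equal cluster proportions are arbitrary example choices, not the exact settings used for the Syn datasets.

using Distributions, Random

# Illustrative generation of a synthetic dataset with 3 Gaussian clusters and 2 attributes.
# The means and covariances below are arbitrary example values.
function generate_synthetic(S::Int; seed=1)
    rng = MersenneTwister(seed)
    comps = [MvNormal([0.0, 0.0],   [2.0 0.0; 0.0 2.0]),
             MvNormal([10.0, 10.0], [2.0 0.0; 0.0 2.0]),
             MvNormal([-10.0, 8.0], [2.0 0.0; 0.0 2.0])]
    per = fld(S, 3)                                          # roughly equal cluster sizes
    return hcat((rand(rng, comps[k], per) for k in 1:3)...)  # 2 × (3·per) matrix of samples
end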
We report four criteria in the result tables to compare the performance of the algorithms: the upper bound (UB), the optimality gap (Gap), the number of solved BB nodes (Nodes), and the run time (Time). UB is the best objective value of the K-center Problem (1). Gap represents the relative difference between the best lower bound (LB) and UB, defined as Gap = (UB − LB)/UB × 100%. The optimality gap is a property unique to deterministic global optimization algorithms; the heuristic algorithm (FFT) does not provide it. Nodes and Time are the iteration count and the run time of the BB scheme from the beginning to termination.
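As a concrete illustration of the gap criterion with made-up numbers: if a run reports UB = 61.75 and LB = 61.70, then Gap = (61.75 − 61.70)/61.75 × 100% ≈ 0.08%, which satisfies the 0.1% termination limit used in our experiments.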
6.2. Serial Results on Synthetic Datasets

Table 1 reports the serial results on synthetic datasets with different numbers of samples and different desired numbers of clusters (K = 3, 5, 10). Compared with the heuristic method FFT, our algorithm BB+CF+BT reduces the UB by 29.4% on average on these synthetic datasets. These results validate the conclusion of Garcia-Diaz et al. (2019) that 2-approximation heuristic algorithms can perform poorly in practice despite their solution quality guarantee. As for the comparison of global optimizers, the direct usage of CPLEX on Problem 4 could not converge to a small optimality gap (≤0.1%) within 4 hours on any of the synthetic datasets. BB+CF, without the acceleration techniques, can reach an optimality gap ≤0.1% within 4 hours on the synthetic datasets with fewer than 42,000 samples and K = 3. The algorithm BB+CF+BT obtains the best upper bounds and reaches a satisfactory gap (≤0.1%) within 4 hours in most experiments. Moreover, compared with BB+CF, BB+CF+BT needs fewer nodes and less run time to reach the same optimality gap. For example, on the Syn-1200 dataset with K = 3, BB+CF needs 1,155,375 nodes and 3,609 seconds to reach a gap ≤0.1%, while BB+CF+BT only needs 23 nodes and 13.5 seconds. These comparisons between BB+CF and BB+CF+BT demonstrate that the acceleration techniques in Section 4 significantly reduce the search space and accelerate the BB procedure.

6.3. Serial Results on Real-World Datasets

Table 2, Table 3, and Table 4 show the serial results on real-world datasets with different sample numbers and desired cluster numbers (K = 3, 5, 10). In these tables, we highlight the best results among the algorithms that reach an optimality gap ≤0.1%. The real-world results are consistent with those on the synthetic datasets. The best solutions generated by the heuristic method (FFT) can be far from optimal, even for very small datasets. For example, on the iris dataset, FFT obtains a UB of 3.66, while our algorithm and CPLEX give a UB of 2.04 with a gap ≤0.1%.
Compared with FFT, our algorithm BB+CF+BT reduces the UB by 22.2% on average on these real-world datasets, and by 25.8% on average over all the synthetic and real-world datasets. Even in the experiments terminated with large gaps, BB+CF+BT obtains a smaller UB than FFT in most cases. For small datasets, our algorithms BB+CF and BB+CF+BT obtain the same UB as CPLEX; however, CPLEX needs significantly more run time and nodes than our algorithms. For all datasets with more than 740 samples, CPLEX cannot even reach an optimality gap ≤50% within 4 hours. On the contrary, BB+CF+BT obtains the best UB and a satisfactory gap (≤0.1%) for most datasets. The comparison of the two versions of our algorithm, BB+CF and BB+CF+BT, demonstrates that the acceleration techniques in Section 4 significantly reduce the computational time and the number of BB nodes needed to solve the problems. Remarkably, with these acceleration techniques, we can even solve several datasets at the root node (Nodes = 1), e.g., the datasets iris, HF, and SGC. Besides, BB+CF+BT results marked with superscript 1 in these tables indicate that we can assign the K farthest initial seeds through FFT at the root node, as described in Section 4.1.1.
We can obtain the initial seeds for about half of the datasets when K = 3. Moreover, the number of nodes is much smaller for the datasets with initial seeds than for those without. This phenomenon indicates that the initial seeds are essential for cluster assignment and bounds tightening, since we need at least one assigned sample in each cluster to execute the sample-based assignment. For most of the datasets with millions of samples and K = 3 in Table 4, BB+CF+BT converges to a small gap (≤0.1%) and provides the best solution within the 4 hours of running. To the best of our knowledge, this is the first time that the K-center problem has been solved to a relatively small gap (≤0.1%) within 4 hours on datasets with over 14 million samples in the serial mode. As a drawback, our algorithm BB+CF+BT still struggles to obtain a small optimality gap when the desired number of clusters is larger than 3. However, it should be noted that the state-of-the-art global optimizer CPLEX cannot solve any dataset to a gap ≤50% when K > 3. On the contrary, BB+CF+BT reaches a gap ≤0.1% on most datasets with fewer than 5 million samples and K = 5. Moreover, in the cases where BB+CF+BT cannot obtain a small optimality gap, it still gives the best UB among all the algorithms in these experiments.
6.4. Parallel Results on Huge-Scale Real-World Datasets

To fully utilize the computational ability of high-performance clusters, we implement our algorithm BB+CF+BT in a parallel manner, as shown in Section 4.3. Here, we test the parallel algorithm on the datasets that could not reach a small gap (≤0.1%) for K = 3 within 4 hours in the serial mode, including two datasets with ten million samples, HIGGS and BigCross. Moreover, we also extend the experiments to a billion-scale dataset called Taxi. This dataset contains over 1.1 billion individual taxi trips with 12 attributes, recorded in New York City from January 2009 through June 2015. We preprocess the Taxi dataset according to the analysis by Schneider (2015) to remove outliers and missing values. As shown in Table 5, the parallel version of BB+CF+BT reaches a small optimality gap (≤0.1%) and a better UB on the datasets BigCross and Taxi within 4 hours. For the dataset HIGGS, the parallel version achieves a smaller UB and gap than the heuristic method and the serial version. As far as we know, this is the first time that the K-center problem has been solved to a relatively small gap (≤0.1%) within 4 hours on a billion-scale dataset.
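As an illustration of the MPI.jl usage pattern behind this parallel mode (and not the actual work-distribution scheme of Section 4.3), the following sketch combines per-rank bound information into global bounds; local_bounds is a dummy placeholder standing in for the real per-rank branch-and-bound work.

using MPI

# Illustrative MPI.jl pattern for combining per-rank bounds; `local_bounds` is a
# dummy placeholder, not the paper's actual parallel scheme.
local_bounds() = (100.0 + rand(), 101.0 + rand())    # dummy (LB, UB) pair for the sketch

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

lb, ub = local_bounds()
global_lb = MPI.Allreduce(lb, MPI.MIN, comm)         # global LB: smallest open-node bound
global_ub = MPI.Allreduce(ub, MPI.MIN, comm)         # global UB: best incumbent across ranks

if rank == 0 && global_ub - global_lb <= 1e-3 * global_ub   # 0.1% gap criterion
    println("Converged to the 0.1% optimality gap")
end
MPI.Finalize()

Such a script would be launched with mpiexec across the worker processes of the cluster.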
7. Conclusion

We propose a global optimization algorithm for the K-center problem based on a tailored reduced-space branch-and-bound scheme. In this algorithm, we only need to branch on the region of the cluster centers to guarantee convergence to the global optimal solution in a finite number of steps. We give a two-stage decomposable formulation and an MINLP formulation of the K-center problem. With the two-stage formulation, we develop a lower bound with closed-form solutions by relaxing the non-anticipativity constraints and the "centers on samples" constraints. As an outcome, the proposed bounding methods are extremely computationally efficient, with no need to solve any optimization sub-problems with external optimizers. Along with the BB procedure, we introduce several acceleration techniques based on the MINLP formulation, including bounds tightening and sample reduction. Numerical experiments show that these acceleration techniques can significantly reduce the search space and accelerate the solution procedure. Moreover, we also give a parallel implementation of our algorithm to fully utilize the computational power of modern high-performance clusters. Extensive numerical experiments have been conducted on synthetic and real-world datasets. These results exhibit the efficiency of our algorithm: we can solve real-world datasets with up to ten million samples in the serial mode, and one billion samples in the parallel mode, to a small optimality gap (≤0.1%) within 4 hours. Finally, our algorithm can be extended to handle certain constrained versions of the K-center problem, such as the capacitated version and the absolute and vertex restricted versions (Calik 2013).
We are interested in developing these variants in future work.

Acknowledgments
The authors acknowledge funding from the Discovery program of the Natural Sciences and Engineering Research Council of Canada under grant RGPIN-2019-05499 and the computing resources provided by SciNet (www.scinethpc.ca) and the Digital Research Alliance of Canada (www.alliancecan.ca). Jiayang Ren acknowledges the financial support from the China Scholarship Council.

References
Aggarwal CC, Wolf JL, Yu PS (2004) Method for targeted advertising on the web based on accumulated self-learning data, clustering users and semantic node graph techniques. US Patent 6,714,975.
Aloise D, Contardo C (2018) A sampling-based exact algorithm for the solution of the minimax diameter clustering problem. Journal of Global Optimization 71(3):613–630.
Brusco MJ, Stahl S (2005) Branch-and-Bound Applications in Combinatorial Data Analysis (New York: Springer).
Byrne S, Wilcox LC, Churavy V (2021) MPI.jl: Julia bindings for the Message Passing Interface. Proceedings of the JuliaCon Conferences 1(1):68.
Calik H (2013) Exact Solution Methodologies for the P-Center Problem under Single and Multiple Allocation Strategies. Thesis, Bilkent University.
Cao Y, Zavala VM (2019) A scalable global optimization algorithm for stochastic nonlinear programs. Journal of Global Optimization 75(2):393–416.
Chen D, Chen R (2009) New relaxation-based algorithms for the optimal solution of the continuous and discrete p-center problems. Computers & Operations Research 36(5):1646–1655.
Chen R, Handler GY (1987) Relaxation method for the solution of the minimax location-allocation problem in Euclidean space. Naval Research Logistics (NRL) 34(6):775–788.
Contardo C, Iori M, Kramer R (2019) A scalable exact algorithm for the vertex p-center problem. Computers & Operations Research 103:211–220.
Cook W, Lovasz L, Seymour P, eds. (1995) Combinatorial Optimization. DIMACS Series in Discrete Mathematics and Theoretical Computer Science (Providence, Rhode Island: American Mathematical Society).
Cplex II (2020) V20.1.0: User's Manual for CPLEX. International Business Machines Corporation.
Dao TBH, Duong KC, Vrain C (2013) A declarative framework for constrained clustering. Machine Learning and Knowledge Discovery in Databases, volume 8190, 419–434.
Daskin MS (2000) A new approach to solving the vertex p-center problem to optimality: Algorithm and computational results. Communications of the Operations Research Society of Japan 45(9):428–436.
Davidović T, Ramljak D, Šelmić M, Teodorović D (2011) Bee colony optimization for the p-center problem. Computers and Operations Research 38(10):1367–1376.
Dua D, Graff C (2017) UCI machine learning repository. URL http://archive.ics.uci.edu/ml.
Duong KC, Vrain C, et al. (2017) Constrained clustering by constraint programming. Artificial Intelligence 244:70–94.
Dyer ME, Frieze AM (1985) A simple heuristic for the p-centre problem. Operations Research Letters 3(6):285–288.
Elloumi S, Labbé M, Pochet Y (2004) A new formulation and resolution method for the p-center problem. INFORMS Journal on Computing 16(1):84–94.
Garcia-Diaz J, Menchaca-Mendez R, Menchaca-Mendez R, Pomares Hernández S, Pérez-Sansalvador JC, Lakouari N (2019) Approximation algorithms for the vertex K-Center problem: survey and experimental evaluation. IEEE Access 7:109228–109245.
Garcia-Diaz J, Sanchez-Hernandez J, Menchaca-Mendez R, Menchaca-Mendez R (2017) When a worse approximation factor gives better performance: a 3-approximation algorithm for the vertex k-center problem. Journal of Heuristics 23(5):349–366.
Garey M, Johnson D (1979) Computers and Intractability: A Guide to the Theory of NP-Completeness (New York: W. H. Freeman).
Gonzalez TF (1985) Clustering to minimize the maximum intercluster distance. Theoretical Computer Science 38:293–306.
Hansen P, Brimberg J, Urošević D, Mladenović N (2009) Solving large p-median clustering problems by primal–dual variable neighborhood search. Data Mining and Knowledge Discovery 19(3):351–375.
Hesabi ZR, Tari Z, Goscinski A, Fahad A, Khalil I, Queiroz C (2015) Data summarization techniques for big data—a survey. Handbook on Data Centers, 1109–1152.
Hochbaum DS, Shmoys DB (1985) A best possible heuristic for the K-Center problem. Mathematics of Operations Research 10(2):180–184.
Horst R, Tuy H (2013) Global Optimization: Deterministic Approaches (Springer Science & Business Media).
Hua K, Shi M, Cao Y (2021) A scalable deterministic global optimization algorithm for clustering problems. International Conference on Machine Learning, 4391–4401.
Ilhan T, Pinar MC (2001) An efficient exact algorithm for the vertex p-center problem. Preprint. [Online]. Available: http://www.ie.bilkent.edu.tr/mustafap/pubs.
Kaufman L, Rousseeuw PJ (2009) Finding Groups in Data: An Introduction to Cluster Analysis (John Wiley & Sons).
Kleindessner M, Awasthi P, Morgenstern J (2019) Fair k-center clustering for data summarization. International Conference on Machine Learning, 3448–3457.
Lim A, Rodrigues B, Wang F, Xu Z (2005) K-Center problems with minimum coverage. Theoretical Computer Science 332(1):1–17.
Mihelič J, Robic B (2005) Solving the k-center problem efficiently with a dominating set algorithm. Journal of Computing and Information Technology 13:225–234.
Minieka E (1970) The m-Center Problem. SIAM Review 12(01).
Mladenović N, Labbé M, Hansen P (2003) Solving the p-Center problem with Tabu search and variable neighborhood search. Networks 42(1):48–64.
Padberg M, Rinaldi G (1991) A branch-and-cut algorithm for the resolution of large-scale symmetric traveling salesman problems. SIAM Review 33(1):60–100.
Plesník J (1987) A heuristic for the p-center problems in graphs. Discrete Applied Mathematics 17(3):263–268.
Pullan W (2008) A memetic genetic algorithm for the vertex p-center problem. Evolutionary Computation 16(3):417–436.
Schneider T (2015) Analyzing 1.1 billion NYC taxi and Uber trips with a vengeance. URL https://toddwschneider.com/posts/analyzing-1-1-billion-nyc-taxi-and-uber-trips-with-a-vengeance/.
Shi M, Hua K, Ren J, Cao Y (2022) Global optimization of K-Center clustering. Proceedings of the 39th International Conference on Machine Learning, 19956–19966.
Wang E, Cai G, Ballachay R, Cao Y, Trajano HL (2022) Predicting xylose yield in prehydrolysis of hardwoods: a machine learning approach. Frontiers in Chemical Engineering 84.

Table 1  Serial results on synthetic datasets. For each K, the entries are UB | Nodes | Gap (%) | Time (s); "4h" indicates that the 4-hour time limit was reached, and superscript 1 marks runs where the K farthest initial seeds were assigned through FFT at the root node (Section 4.1.1).

Syn-300 (3.0E+2 samples, 2 attributes)
  FFT        K=3: 69.68 | – | – | –               K=5: 43.33 | – | – | –               K=10: 21.88 | – | – | –
  CPLEX      K=3: 61.75 | 2.9E+4 | ≤0.1 | 29      K=5: 37.14 | 2.3E+7 | 19.4 | 4h      K=10: 16.06 | 1.2E+7 | 100.0 | 4h
  BB+CF      K=3: 61.75 | 5.5E+4 | ≤0.1 | 46      K=5: 37.14 | 2.3E+6 | 16.2 | 4h      K=10: 15.64 | 1.7E+6 | 100.0 | 4h
  BB+CF+BT   K=3: 61.75 | 17 | ≤0.1 | 13          K=5: 37.14 | 1,764 | ≤0.1 | 15       K=10: 12.31 | 2.0E+4 | ≤0.1 | 38

Syn-1200 (1.2E+3 samples, 2 attributes)
  FFT        K=3: 93.34 | – | – | –               K=5: 58.46 | – | – | –               K=10: 30.49 | – | – | –
  CPLEX      K=3: 84.81 | 5.8E+6 | 1.6 | 4h       K=5: 34.29 | 3.5E+6 | 7.8 | 4h       K=10: 89.32 | 8.1E+5 | 100.0 | 4h
  BB+CF      K=3: 84.81 | 1.2E+6 | ≤0.1 | 3,609   K=5: 34.29 | 1.4E+6 | 12.5 | 4h      K=10: 21.81 | 1.0E+6 | 100.0 | 4h
  BB+CF+BT   K=3: 84.81 | 23 | ≤0.1¹ | 14         K=5: 34.29 | 411 | ≤0.1 | 15         K=10: 14.51 | 3.0E+4 | ≤0.1 | 148

Syn-2100 (2.1E+3 samples, 2 attributes)
  FFT        K=3: 106.50 | – | – | –              K=5: 72.70 | – | – | –               K=10: 36.04 | – | – | –
  CPLEX      K=3: 95.10 | 3.0E+6 | 0.2 | 4h       K=5: 49.32 | 1.3E+6 | 100.0 | 4h     K=10: 193.26 | 3.4E+5 | 100.0 | 4h
  BB+CF      K=3: 95.10 | 1.5E+6 | ≤0.1 | 11,606  K=5: 42.58 | 1.0E+6 | 20.8 | 4h      K=10: 25.78 | 5.3E+5 | 100.0 | 4h
  BB+CF+BT   K=3: 95.10 | 17 | ≤0.1¹ | 13         K=5: 42.58 | 455 | ≤0.1 | 16         K=10: 17.65 | 8.9E+4 | ≤0.1 | 725

Syn-42000 (4.2E+4 samples, 2 attributes)
  FFT        K=3: 161.98 | – | – | –              K=5: 96.12 | – | – | –               K=10: 47.21 | – | – | –
  CPLEX      No feasible solution                 No feasible solution                 No feasible solution
  BB+CF      K=3: 142.33 | 1.7E+5 | 6.7 | 4h      K=5: 63.40 | 1.0E+5 | 28.1 | 4h      K=10: 44.24 | 5.4E+4 | 100.0 | 4h
  BB+CF+BT   K=3: 142.33 | 103 | ≤0.1 | 21        K=5: 62.77 | 5.0E+3 | ≤0.1 | 363     K=10: 28.29 | 5.8E+4 | 36.1 | 4h

Syn-210000 (2.1E+5 samples, 2 attributes)
  FFT        K=3: 175.81 | – | – | –              K=5: 120.78 | – | – | –              K=10: 66.79 | – | – | –
  CPLEX      No feasible solution                 No feasible solution                 No feasible solution
  BB+CF      K=3: 168.57 | 4.4E+4 | 7.0 | 4h      K=5: 77.02 | 2.5E+4 | 43.8 | 4h      K=10: 53.73 | 1.4E+4 | 100.0 | 4h
  BB+CF+BT   K=3: 168.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='57 5 ≤0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='11 21 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='88 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='4E+3 ≤0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='1 1,118 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='48 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='2E+4 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='2 4h 1 Can assign K initial seeds through FFT at the root node.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' BB+CF+BT results without this superscript means can not assign initial seeds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' Ren et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content=' : Global Optimization for K-Center of One Billion Samples 29 Table 2 Serial results on small-scale datasets (S ≤ 1,000) Dataset Sam ple Dimen sion Method K=3 K=5 K=10 UB Nodes Gap (%) Time (s) UB Nodes Gap (%) Time (s) UB Nodes Gap (%) Time (s) iris 150 4 FFT 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='65 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='80 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='95 CPLEX 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='04 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='2E+5 ≤0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='1 46 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='54 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='8E+6 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='0 4h 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='21 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfRPZI/content/2301.00061v1.pdf'} +page_content='4E+7 100.' 
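For readers who want to interpret the UB and Gap (%) columns in these tables, the short sketch below shows one way to compute them. This is illustrative code, not the paper's implementation: it assumes that UB is the K-center objective value of a candidate solution (the largest squared Euclidean distance from any sample to its nearest selected center) and that the gap is the relative difference between an upper bound and a lower bound. The function names and the toy data are placeholders.

    import numpy as np

    def kcenter_objective(X, centers):
        # X: (S, A) array of samples; centers: (K, A) array of selected centers.
        # Squared Euclidean distance from every sample to every center, shape (S, K).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        # Assign each sample to its nearest center, then report the worst case;
        # this is the quantity assumed to be reported in the UB columns.
        return d2.min(axis=1).max()

    def optimality_gap(ub, lb):
        # Relative gap in percent between an upper bound and a lower bound,
        # one common convention for a Gap (%) column.
        return 100.0 * (ub - lb) / ub

    # Toy usage with illustrative data (not one of the paper's datasets).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 2))
    centers = X[rng.choice(len(X), size=3, replace=False)]  # "centers on samples"
    print(kcenter_objective(X, centers))
    print(optimality_gap(2.65, 2.04))  # e.g., comparing FFT's UB to the best UB reported for iris, K=3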
Table 2  Serial results on small-scale datasets (S ≤ 1,000)
(Columns: Dataset, Sample, Dimension, Method; then UB, Nodes, Gap (%), and Time (s) for K=3 | K=5 | K=10. FFT rows list UB only. A time of 4h corresponds to the 4-hour limit.)

iris (150 samples, dimension 4)
  FFT        2.65 | 1.80 | 0.95
  CPLEX      2.04  1.2E+5  ≤0.1  46 | 1.54  2.8E+6  60.0  4h | 1.21  1.4E+7  100.0  4h
  BB+CF      2.04  1.3E+4  ≤0.1  17 | 1.20  3.1E+6  ≤0.1  5,472 | 0.74  2.2E+6  100.0  4h
  BB+CF+BT   2.04  1  ≤0.1^1  12 | 1.20  409  ≤0.1  14 | 0.66  9.6E+5  25.8  4h

seeds (210 samples, dimension 7)
  FFT        13.17 | 9.01 | 4.48
  CPLEX      10.44  1.2E+6  ≤0.1  542 | 11.61  2.5E+6  96.1  4h | 21.48  5.6E+5  100.0  4h
  BB+CF      10.44  7.2E+3  ≤0.1  17 | 7.22  2.7E+6  8.3  4h | 3.51  1.5E+6  100.0  4h
  BB+CF+BT   10.44  21  ≤0.1^1  13 | 7.22  1,444  ≤0.1  15 | 2.92  2.1E+5  ≤0.1  569

glass (214 samples, dimension 9)
  FFT        27.52 | 22.28 | 11.73
  CPLEX      Out of memory | Out of memory | Out of memory
  BB+CF      27.52  5.6E+3  ≤0.1  15 | 16.44  9.7E+5  ≤0.1  1,522 | 10.64  1.4E+6  100.0  4h
  BB+CF+BT   27.52  191  ≤0.1  13 | 16.44  4.4E+3  ≤0.1  17 | 7.95  1.7E+6  ≤0.1  9,180

BM (249 samples, dimension 6)
  FFT        1.52E+04 | 1.12E+04 | 5.33E+03
  CPLEX      No feasible solution | 1.48E+04  8.8E+6  100.0  4h | 1.63E+04  2.4E+6  100.0  4h
  BB+CF      1.05E+04  1.4E+4  ≤0.1  22 | 6.32E+03  2.2E+6  12.0  4h | 5.01E+03  1.4E+6  100.0  4h
  BB+CF+BT   1.05E+04  63  ≤0.1^1  13 | 6.32E+03  1.8E+4  ≤0.1  29 | 4.98E+03  6.7E+5  97.9  4h

UK (258 samples, dimension 5)
  FFT        0.70 | 0.57 | 0.42
  CPLEX      Out of memory | Out of memory | Out of memory
  BB+CF      0.53  3.2E+5  ≤0.1  258 | 0.43  1.5E+6  43.9  4h | 0.33  1.4E+6  100.0  4h
  BB+CF+BT   0.53  1.6E+4  ≤0.1  23 | 0.43  8.9E+5  26.9  4h | 0.31  6.1E+5  97.3  4h

HF (299 samples, dimension 12)
  FFT        2.69E+10 | 1.17E+10 | 1.68E+09
  CPLEX      No feasible solution | No feasible solution | No feasible solution
  BB+CF      1.72E+10  339  ≤0.1  10 | 1.02E+10  2.1E+4  ≤0.1  44 | 1.52E+09  3.4E+6  100.0  4h
  BB+CF+BT   1.72E+10  1  ≤0.1^1  12 | 1.02E+10  557  ≤0.1  14 | 1.44E+09  1.2E+6  53.2  4h

Who (440 samples, dimension 8)
  FFT        4.58E+09 | 3.18E+09 | 9.81E+08
  CPLEX      No feasible solution | No feasible solution | No feasible solution
  BB+CF      3.49E+09  3.4E+3  ≤0.1  15 | 2.11E+09  1.7E+5  ≤0.1  341 | 9.27E+08  1.5E+6  100.0  4h
  BB+CF+BT   3.49E+09  375  ≤0.1  14 | 2.11E+09  2.3E+3  ≤0.1  16 | 8.21E+08  8.4E+5  62.0  4h

HCV (602 samples, dimension 12)
  FFT        1.75E+05 | 8.38E+04 | 3.03E+04
  CPLEX      1.41E+05  9.5E+5  ≤0.1  3,720 | 8.73E+04  5.1E+5  100.0  4h | 4.47E+04  3.8E+5  100.0  4h
  BB+CF      1.41E+05  291  ≤0.1  10 | 6.37E+04  2.2E+4  ≤0.1  76 | 2.36E+04  1.4E+6  100.0  4h
  BB+CF+BT   1.41E+05  39  ≤0.1  13 | 6.37E+04  583  ≤0.1  15 | 2.16E+04  7.6E+5  ≤0.1  6,300

Abs (740 samples, dimension 21)
  FFT        1.94E+04 | 1.19E+04 | 7.81E+03
  CPLEX      1.72E+04  9.3E+5  52.0  4h | 3.02E+04  1.7E+4  100.0  4h | No feasible solution
  BB+CF      1.39E+04  3.3E+4  ≤0.1  153 | 9.93E+03  1.5E+6  18.9  4h | 6.15E+03  9.8E+5  100.0  4h
  BB+CF+BT   1.39E+04  611  ≤0.1  15 | 9.92E+03  5.0E+4  ≤0.1  178 | 6.37E+03  4.6E+5  98.3  4h

TR (980 samples, dimension 10)
  FFT        7.32 | 6.42 | 4.41
  CPLEX      8.32  1.6E+6  54.5  4h | 7.82  2.0E+5  100.0  4h | 8.70  3.3E+4  100.0  4h
  BB+CF      5.94  7.4E+5  ≤0.1  2,953 | 4.49  1.3E+6  47.7  4h | 3.69  9.8E+5  100.0  4h
  BB+CF+BT   5.94  3.3E+4  ≤0.1  83 | 4.49  1.0E+6  24.6  4h | 3.73  5.0E+5  99.9  4h

SGC (1,000 samples, dimension 21)
  FFT        1.33E+07 | 4.08E+06 | 9.50E+05
  CPLEX      9.45E+06  5.0E+4  100.0  4h | 1.56E+08  10  100.0  4h | No feasible solution
  BB+CF      9.45E+06  411  ≤0.1  12 | 3.91E+06  2.8E+4  ≤0.1  185 | 9.50E+05  9.6E+5  100.0  4h
  BB+CF+BT   9.45E+06  1  ≤0.1^1  12 | 3.91E+06  1  ≤0.1^1  12 | 9.50E+05  5.8E+5  100.0  4h

^1 K initial seeds can be assigned through FFT at the root node; BB+CF+BT results without this superscript mean that initial seeds could not be assigned.

Table 3  Serial results on large-scale datasets (1,000