
Ball-polyhedra are intersections of finitely many congruent balls in Euclidean space. A ball-polyhedron is called "fat" if it contains the centers of its generating balls. The core of this talk is an extension of Schramm's theorem on illuminating convex bodies of constant width, together with its proof, to the family of fat ball-polyhedra.
A fundamental domain of a geometric object X is a minimal subset D such that X can be covered by the images of D under the natural symmetry group of X. A convex polyhedral cone has the property that every orbit of extreme rays has exactly one representative in a given fundamental domain. In this talk I will present some ideas and preliminary experiments on computing orbits of extreme rays of convex cones by computing (approximately) a fundamental domain of the cone.
Halfspace depth (or Tukey depth) is one of the most commonly studied data depth measures because it enjoys many desirable properties of data depth functions (in both the sample and continuous cases). The data depth contours bound regions of increasing depth. For the sample case, there are two competing definitions of contours: the rank-based contours and the cover-based contours.
In this talk, I will present three dynamic algorithms for maintaining the halfspace depth of points and contours. The first maintains the halfspace depth of a single point in a data set in O(log n) time per update (insertion/deletion) and overall linear space. A corollary of this result is a strong structural result on the behavior of dynamic cover-based contours near data points, which is of independent interest. By maintaining such a data structure for each data point, I will present an algorithm for dynamically maintaining the rank-based contours in O(n log n) time per update and overall quadratic space. Finally, the third dynamic algorithm maintains the cover-based contours in O(n log^{2} n) time per update and overall quadratic space.
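As a concrete illustration of the underlying definition (a static, brute-force sketch for context only, not the dynamic data structures of the talk): the halfspace depth of p is the minimum, over closed halfspaces with p on the boundary, of the number of data points in the halfspace. In the plane the count only changes when the halfspace normal is orthogonal to some vector x − p, so it suffices to test those critical directions and midpoints between them.

```python
import math

def tukey_depth_2d(p, pts):
    """Exact halfspace (Tukey) depth of p w.r.t. pts in the plane:
    min over unit u of #{x in pts : <u, x - p> >= 0}.  The count is
    piecewise constant in the angle of u and changes only when u is
    orthogonal to some x - p, so evaluating at those critical angles
    and at midpoints between consecutive ones is enough."""
    angles = []
    for (x, y) in pts:
        dx, dy = x - p[0], y - p[1]
        if dx == 0 and dy == 0:
            continue  # a point coinciding with p lies in every halfspace
        a = math.atan2(dy, dx)
        angles.extend([(a + math.pi / 2) % (2 * math.pi),
                       (a - math.pi / 2) % (2 * math.pi)])
    if not angles:
        return len(pts)
    angles.sort()
    candidates = list(angles)
    for a, b in zip(angles, angles[1:] + [angles[0] + 2 * math.pi]):
        candidates.append((a + b) / 2)  # midpoint of each constant interval
    best = len(pts)
    for phi in candidates:
        u = (math.cos(phi), math.sin(phi))
        count = sum(1 for (x, y) in pts
                    if u[0] * (x - p[0]) + u[1] * (y - p[1]) >= -1e-12)
        best = min(best, count)
    return best
```

For the four corners of a square, e.g., the depth of the center is 2: every closed halfspace through the center contains at least one point of each opposite pair.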
A Monte Carlo approximation algorithm for the Tukey depth problem in high dimensions is introduced. The algorithm is a generalization of an algorithm presented by Rousseeuw and Struyf (1998). The performance of this algorithm is studied both analytically and experimentally.
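The random-direction idea behind such approximations can be sketched as follows (a hedged toy version in the spirit of Rousseeuw and Struyf, not the algorithm of the talk): each sampled direction u gives the count #{x : ⟨u, x − p⟩ ≥ 0}, which upper-bounds the depth, and the minimum over many samples is the estimate.

```python
import random

def approx_tukey_depth(p, pts, trials=2000, seed=0):
    """Monte Carlo upper bound on the Tukey depth of p w.r.t. pts:
    sample random unit directions u and keep the smallest halfspace
    count min_u #{x : <u, x - p> >= 0}.  Since the true depth is the
    minimum over ALL directions, the estimate never underestimates."""
    rng = random.Random(seed)
    d = len(p)
    best = len(pts)
    for _ in range(trials):
        # Gaussian coordinates give a uniformly random direction
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        count = sum(1 for x in pts
                    if sum(u[k] * (x[k] - p[k]) for k in range(d)) >= 0)
        best = min(best, count)
    return best
```

The estimate converges to the exact depth as the number of sampled directions grows; in high dimensions the quality per sample degrades, which is the regime the talk addresses.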
A point p ∈ R^{d} has simplicial depth q relative to a set S if it is contained in q closed simplices generated by (d+1)-point subsets of S. More generally, we consider colourful simplicial depth, where the single set S is replaced by (d+1) sets, or colours, S_{1},...,S_{d+1}, and the colourful simplices containing p are generated by taking one point from each set. Assuming that the convex hulls of the S_{i}'s contain p in their interiors, Bárány's colourful Carathéodory Theorem (1982) shows that p must be contained in some colourful simplex. We are interested in determining the minimum number of colourful simplices that can contain p for sets satisfying these conditions. That is, we would like to determine μ(d), the minimum number of colourful simplices drawn from S_{1},...,S_{d+1} that contain p ∈ R^{d} given that p ∈ int( conv(S_{i}) ) for each i. Without loss of generality, we assume that the points of ∪_{i} S_{i} ∪ {p} are in general position. The quantity μ(d) was investigated in [3], where it is shown that 2d ≤ μ(d) ≤ d^{2}+1, that μ(d) is even for odd d, and that μ(2)=5. That paper also conjectures that μ(d) = d^{2}+1 for all d ≥ 1. Subsequently, Bárány and Matoušek (2007) verified the conjecture for d=3 and provided a lower bound of μ(d) ≥ max(3d, ⌈d(d+1)/5⌉) for d ≥ 3, while Stephen and Thomas (2008) independently provided a lower bound of μ(d) ≥ ⌈(d+2)^{2}/4⌉. We show that for d ≥ 1, we have μ(d) ≥ ⌈(d+1)^{2}/2⌉. This strengthens the previously known lower bounds for all d ≥ 4.
Joint work with Tamon Stephen (Simon Fraser) and Feng Xie (McMaster).
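For intuition in the plane, colourful simplicial depth can be computed by brute force over all colourful triangles; the following is a minimal sketch assuming the points are in general position (no degenerate colourful triples), not a method from the results above.

```python
from itertools import product

def orient(a, b, c):
    """Twice the signed area of triangle abc (positive = counterclockwise)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def in_triangle(p, a, b, c):
    """True iff p lies in the closed triangle abc (any orientation)."""
    d1, d2, d3 = orient(p, a, b), orient(p, b, c), orient(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def colourful_depth_2d(p, S1, S2, S3):
    """Number of closed colourful triangles (one vertex per colour)
    that contain p; assumes general position."""
    return sum(1 for a, b, c in product(S1, S2, S3)
               if in_triangle(p, a, b, c))
```

With one point per colour forming a triangle around p this returns 1, matching the guarantee of the colourful Carathéodory Theorem; μ(2) = 5 says that with p interior to all three colour hulls the count can never drop below 5.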
Let ∆(d,n) be the maximum possible edge diameter over all d-dimensional polytopes defined by n inequalities. The Hirsch conjecture, formulated in 1957, states that ∆(d,n) is no greater than n−d. No polynomial bound is currently known for ∆(d,n), the best one being quasi-polynomial, due to Kalai and Kleitman in 1992. Goodey showed in 1972 that ∆(4,10)=5 and ∆(5,11)=6, and more recently, Bremner and Schewe showed ∆(4,11) = ∆(6,12) = 6. In this follow-up work, we show that ∆(4,12)=7 and present strong evidence that ∆(5,12) = ∆(6,13) = 7.
Joint work with David Bremner, Antoine Deza and Lars Schewe.
The Hirsch conjecture was posed in 1957 in a letter from Warren M. Hirsch to George Dantzig. It states that the graph of a d-dimensional polytope with n facets cannot have diameter greater than n−d. Despite being one of the oldest and most fundamental problems in combinatorial geometry and optimization, what we know about it is quite scarce. Most notably, no polynomial upper bound is known for diameters that are conjectured to be linear. In contrast, very few polytopes are known where the bound n−d is attained. We survey results on both the positive and the negative side of the conjecture.
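One well-known polytope attaining the bound with equality is the d-cube: it has n = 2d facets and graph diameter exactly d = n − d. A quick breadth-first-search sanity check (illustrative only, not part of the survey):

```python
from collections import deque

def cube_graph(d):
    """Vertex-edge graph of the d-cube: vertices are the 2^d bitstrings,
    with an edge between strings at Hamming distance 1."""
    return {v: {v ^ (1 << k) for k in range(d)} for v in range(2 ** d)}

def graph_diameter(adj):
    """Diameter of a connected graph {vertex: set of neighbours},
    via breadth-first search from every vertex."""
    best = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        best = max(best, max(dist.values()))
    return best
```

For d = 4 this reports diameter 4, matching n − d = 8 − 4.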
While much is known about the structural properties of the cut cone and polytope and of their relaxations, the metric cone and polytope, little is known about their directed counterparts.
The directed cut polytope (resp. cone) is the convex hull (resp. positive hull) of the arc incidence vectors of the directed cuts of the complete digraph. We present some structural properties of the directed cut cone and polytope and of their natural relaxations, the directed metric cone and polytope. In particular, it is shown that the directed cut polytope (resp. cone) is the convex (resp. positive) hull of two cut polytopes (resp. cones). New facets of the directed cut polytope are presented, based on bijections from well-known facets of the cut polytope.
Structural results on the projection of the directed cut polytope and cone onto the arc set of an arbitrary digraph G will also be presented. In particular, facet-preserving operations will be discussed, including directed versions of triangular elimination and zero-lifting.
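As a small illustration of the objects involved (a sketch for intuition, not part of the results above): each nonempty proper vertex subset S of the complete digraph yields the directed cut δ⁺(S), the set of arcs leaving S, and the polytope is the convex hull of the corresponding 0/1 arc incidence vectors.

```python
from itertools import combinations

def directed_cut_vectors(k):
    """Arc incidence vectors of the directed cuts delta^+(S) of the
    complete digraph on k nodes, one vector per nonempty proper
    subset S: entry 1 on arc (u, v) iff u is in S and v is not."""
    arcs = [(u, v) for u in range(k) for v in range(k) if u != v]
    vectors = []
    for r in range(1, k):
        for subset in combinations(range(k), r):
            S = set(subset)
            vectors.append(tuple(1 if (u in S and v not in S) else 0
                                 for (u, v) in arcs))
    return arcs, vectors
```

For k = 3 this yields 2^3 − 2 = 6 distinct vectors, each with |S|·(k − |S|) = 2 ones.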
The space of metric phylogenetic trees is a polyhedral complex, as constructed by Billera, Holmes, and Vogtmann (2001). This space is non-positively curved, so there is a unique shortest path between any two trees and a well-defined notion of an average, or mean, tree for a given set of trees. Finding either of these is a convex optimization problem. We will describe how a polynomial-time algorithm for computing the shortest path can be used in computing the mean tree. We will also discuss how the mean tree can be used in applications such as reconstructing species trees from gene trees and comparing the topology of blood vessels in the brain.
Joint work with Ezra Miller (Duke) and Scott Provan (UNC).
Branch-and-bound is a classical method for solving integer programming feasibility problems. On the theoretical side, it is considered inefficient: it can provably take an exponential number of nodes to prove the infeasibility of a simple integer program. In this work we show that branch-and-bound is theoretically efficient if we first apply a simple transformation to the constraint matrix of the problem that makes the columns short and nearly orthogonal. We analyze two such reformulation methods, called the range-space and the null-space methods. We prove that if the coefficients of the problem are drawn from {1,...,M} for a sufficiently large M, then for almost all such instances the number of subproblems that need to be enumerated by branch-and-bound is at most one. Besides giving an analysis of branch-and-bound, our main result generalizes results of Lagarias and Odlyzko, Frieze, and Furst and Kannan on the solvability of subset sum problems to bounded integer programs.
We give some numerical values of M which ensure that 99 percent of the reformulated problems solve at the root node. These values turn out to be surprisingly small for moderate values of n (the number of variables) and m (the number of constraints).
We also report the results of a computational study showing that branchandbound can efficiently solve subset sum problems with huge coefficients, that arise from cryptographic applications.
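For readers unfamiliar with the baseline method, here is a minimal branch-and-bound for subset-sum feasibility (a textbook sketch with a simple reachability prune, not the reformulation method of the talk): at each node we branch on whether to take the next item, and prune when the remaining target is negative or exceeds the sum of the remaining items.

```python
def subset_sum_feasible(weights, target):
    """Branch-and-bound for subset-sum feasibility: is there a 0/1
    vector x with sum_i w_i * x_i == target?  Assumes nonnegative
    weights, so both prunes below are sound."""
    w = sorted(weights, reverse=True)
    n = len(w)
    # suffix[i] = total weight of items i..n-1, used for pruning
    suffix = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + w[i]

    def dfs(i, remaining):
        if remaining == 0:
            return True
        if i == n or remaining < 0 or remaining > suffix[i]:
            return False  # prune: target unreachable from this node
        # branch: take item i, then skip item i
        return dfs(i + 1, remaining - w[i]) or dfs(i + 1, remaining)

    return dfs(0, target)
```

On instances with huge, near-orthogonal coefficients, as produced by the reformulations above, the search tree of such a procedure collapses to (almost) a single node.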
Joint work with Mustafa Tural at IMA, and Erick B. Wong at UBC.
Over the past twenty years, statisticians have developed the concept of data depth as a method of multivariate data analysis that requires no prior assumptions on the probability distribution of the data and that is robust to outliers. The proposed data depth measures are inherently geometric, assigning to each data point a numeric value that represents its centrality within the given data set. It is then possible to create depth contours that bound the sets of points whose depth exceeds each of a set of fixed thresholds, and to use these contours for analysis and visualization of the (presumably large) data set.
Data depth remains a relatively new field. A number of data depth measures have been defined and analysed, and new ones continue to be proposed. Algorithmic and combinatorial techniques from computational geometry are used to develop more efficient tools for data-depth analysis. This talk will provide an overview of some of the standard data depth measures, such as halfspace depth, simplicial depth, regression depth, L_{1} depth, and proximity graph depth. It will also discuss a set of desirable properties for data depth functions, describe enhancements of some of the standard measures that address previous weaknesses, and provide a framework for evaluating current and future depth functions.
A simplex spanned by a colored point set S in Euclidean d-space is colorful if all its vertices have distinct colors. The union of all full-dimensional colorful simplices is the colorful union, denoted by U(S). We show that the maximum combinatorial complexity of the colorful union of n points in d-space is Ω(n^{(d−1)^{2}}). We prove several structural properties of the colorful union. In particular, U(S) is the union of d+1 star-shaped polyhedra, which leads to efficient data structures for point-inclusion queries in U(S). To illustrate the difficulty of working with the colorful union, we construct colored point sets S of size n in 3-space with some pathological features.
Joint work with André Schulz (MIT).
It is known that in the Minkowski sum of r polytopes in dimension d, with r < d, the number of vertices of the sum can be as high as the product of the numbers of vertices of the summands. However, the maximum number of vertices for sums of more polytopes was unknown until now.
In this talk, I study polytopes in general orientations, and I show that the number of faces of a sum of r polytopes in dimension d, with r ≥ d, is linearly related to the number of faces in sums of fewer than d of the summand polytopes. From this exact formula we can deduce a tight bound on the maximum possible number of vertices of the Minkowski sum of any number of polytopes in any dimension. In particular, the linear relation implies that a sum of r polytopes in dimension d, whose summands have n vertices in total, has fewer than (n choose d−1) vertices, even when r ≥ d.
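In the plane the vertex count is easy to verify directly (a sketch for intuition, separate from the results above): the Minkowski sum of convex polygons is the convex hull of all sums of one vertex per summand, and when no two summand edges are parallel the sum has exactly as many vertices as the summands have in total.

```python
from itertools import product

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order,
    dropping points interior to hull edges."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def minkowski_vertices(*polys):
    """Vertices of the Minkowski sum of convex polygons, as the convex
    hull of all sums of one vertex from each summand."""
    sums = [tuple(sum(c) for c in zip(*combo)) for combo in product(*polys)]
    return convex_hull(sums)
```

A unit square (4 vertices) plus a triangle with no axis-parallel edges (3 vertices) yields a sum with 7 vertices, while the square plus itself merely rescales the square and keeps 4.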
Wireless sensor networks have many applications, e.g., in monitoring physical or environmental conditions (temperature, sound, vibration, pressure, battlefield surveillance, home automation, hospital patients, traffic control, etc.). The sensor network localization (SNL) problem consists of locating the positions of ad hoc wireless sensors, given only the distances between sensors that are within radio range and the positions of a subset of the sensors (called anchors). A main point is to view SNL as a (nearest) Euclidean Distance Matrix (EDM) completion problem that does not distinguish between the anchors and the sensors. We show that there are advantages to using the well-studied EDM model. This problem can be relaxed to a weighted, nearest, (positive) semidefinite programming (SDP) completion problem. This relaxation is ill-conditioned in two ways. First, it is implicitly highly degenerate, in the sense that the feasible set is restricted to a low-dimensional face of the SDP cone. This means that the Slater constraint qualification fails. Second, non-uniqueness of the optimal solution results in large sensitivity to small perturbations in the data.
The degeneracy in the SDP arises from cliques in the graph of the SNL problem. We take advantage of the absence of the Slater constraint qualification and derive a preprocessing technique that solves the SNL problem. With exact data, we explicitly solve the corresponding SDP problem without using any SDP solver. We do this by finding explicit representations of the faces of the SDP cone corresponding to intersections of cliques of the SNL problem. For problems with noise, we first solve nearest-matrix problems to obtain best EDM approximations.
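The EDM viewpoint rests on the classical Schoenberg correspondence (background illustration, not the preprocessing technique of the talk): for an n × n matrix D of squared distances, the double-centered matrix B = −½ J D J with J = I − ee^T/n is the Gram matrix of the points centered at their mean, and D is a Euclidean distance matrix exactly when B is positive semidefinite.

```python
def edm_to_gram(D):
    """Schoenberg's double-centering map: given an n x n squared-distance
    matrix D, return B = -1/2 * J D J (J = I - ee^T/n), entrywise
    B_ij = -1/2 * (D_ij - rowmean_i - rowmean_j + grandmean).
    B is the Gram matrix of the mean-centered point configuration."""
    n = len(D)
    row = [sum(D[i]) / n for i in range(n)]  # row means of D
    tot = sum(row) / n                       # grand mean of D
    return [[-0.5 * (D[i][j] - row[i] - row[j] + tot)
             for j in range(n)] for i in range(n)]
```

For the points (0,0), (3,0), (0,4), the squared-distance matrix has rows [0,9,16], [9,0,25], [16,25,0], and the map recovers the Gram matrix of the centered configuration (e.g. B[0][0] = 25/9, B[0][1] = −2/9).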
Joint work with Nathan Krislock.