The publication data may not always be up to date, and some papers that are listed below as manuscripts or to appear may already have appeared. Check the publication list by following the links from the paper titles for the most complete and up-to-date information, including the on-line original versions by the publishers, and a link to a BibTeX entry which you can copy directly. (The more recent entries have the BibTeX link already on this page.)
We construct a probabilistic finite automaton (PFA) with 7 states and an input alphabet of 5 symbols for which the PFA Emptiness Problem is undecidable. The only input for the decision problem is the starting distribution. For the proof, we use reductions from special instances of the Post Correspondence Problem.
We also consider some variations: The input alphabet of the PFA can be restricted to a binary alphabet at the expense of a larger number of states. If we allow a rational output value for each state instead of a yes-no acceptance decision, the number of states can even be reduced to 6.
It is undecidable whether the language recognized by a probabilistic finite automaton is empty. Several other undecidability results, in particular regarding problems about matrix products, are based on this important theorem. We present three proofs of this theorem from the literature in a self-contained way, and we derive some strengthenings. For example, we show that the problem remains undecidable for a fixed probabilistic finite automaton with 11 states, where only the starting distribution is given as input.
Grid peeling is the process of repeatedly removing the convex hull vertices of the grid points that lie inside a given convex curve. It has been conjectured that, for a more and more refined grid, grid peeling converges to a continuous process, the affine curve-shortening flow, which deforms the curve based on the curvature. We prove this conjecture for one class of curves, parabolas with a vertical axis, and we determine the value of the constant factor in the formula that relates the two processes.
We survey a few strengthenings and generalizations of the Schwartz–Zippel Lemma and Alon's Combinatorial Nullstellensatz. These lemmas guarantee the existence of (a certain number of) nonzeros of a multivariate polynomial when the variables run independently through sufficiently large ranges.
We classify the finite groups of orthogonal transformations in 4-space, and we study these groups from the viewpoint of their geometric action, using polar orbit polytopes. For one type of groups (the toroidal groups), we develop a new classification based on their action on an invariant torus, while we rely on classic results for the remaining groups.
As a tool, we develop a convenient parameterization of the oriented great circles on the 3-sphere, which leads to (oriented) Hopf fibrations in a natural way.
A face in a curve arrangement is called popular if it is bounded by the same curve multiple times. Motivated by the automatic generation of curved nonogram puzzles, we investigate possibilities to eliminate popular faces in an arrangement by inserting a single additional curve. This turns out to be already NP-hard; however, we present a probabilistic FPT-approach in the number of such faces.
We provide almost tight bounds on the minimum and maximum possible numbers of compositions of two polycubes of given total size, in two and higher dimensions. In the plane, we reach a near-quadratic number of compositions. We also provide an efficient algorithm (with some trade-off between time and space) for computing the number of compositions of two given polyominoes (or polycubes).
In a hypergraph with vertex set V and edge set E, a real-valued function f: V→[0, 1] is a fractional transversal if the sum of f(v) over all vertices v in an edge e is ≥1 for every edge e∈E. Its size |f| is the sum of f(v) over all vertices v∈V. The fractional transversal number is the smallest possible |f|. We consider a game scenario where two players with opposite goals construct a fractional transversal incrementally, trying to minimize and maximize |f|, respectively. We prove that both players have strategies to achieve their common optimum, and they can reach their goals using rational weights.
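The fractional transversal number itself (without the game) is a small linear program. A minimal sketch in Python using scipy, with vertices numbered 0,…,n−1 and edges given as vertex sets (the encoding and names are ours, for illustration only):

```python
from scipy.optimize import linprog

def fractional_transversal_number(n, edges):
    """Minimize sum_v f(v) subject to sum_{v in e} f(v) >= 1 for every edge e."""
    # scipy expects A_ub @ x <= b_ub, so negate the covering constraints.
    A_ub = [[-1.0 if v in e else 0.0 for v in range(n)] for e in edges]
    res = linprog(c=[1.0] * n, A_ub=A_ub, b_ub=[-1.0] * len(edges),
                  bounds=[(0.0, 1.0)] * n)
    return res.fun

# The triangle {0,1}, {1,2}, {0,2} has fractional transversal number 3/2
# (take f = 1/2 everywhere), while every integral transversal has size 2.
print(fractional_transversal_number(3, [{0, 1}, {1, 2}, {0, 2}]))  # 1.5
```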
What is the maximum number of intersections of the boundaries of a simple m-gon and a simple n-gon, assuming general position? This is a basic question in combinatorial geometry, and the answer is easy if at least one of m and n is even: If both m and n are even, then every pair of sides may cross and so the answer is mn. If exactly one polygon, say the n-gon, has an odd number of sides, it can intersect each side of the m-gon at most n−1 times; hence there are at most mn−m intersections. It is not hard to construct examples that meet these bounds. If both m and n are odd, the best known construction has mn−(m+n)+3 intersections, and it is conjectured that this is the maximum. However, the best known upper bound is only mn − (m+⌈n/6⌉), for m≥n. We prove a new upper bound of mn − (m+n) + C for some constant C, which is optimal apart from the value of C.
We discuss a piecewise linear (PL) analogue of Morse theory for PL manifolds. There are several notions of regular and critical points. A point is homologically regular if the homology does not change when passing through its level; it is strongly regular if the function can serve as one coordinate in a chart. Several criteria for strong regularity are presented. In particular, we show that in low dimensions d≤4, a homologically regular point on a PL d-manifold is always strongly regular. Examples show that this fails to hold in higher dimensions d≥5. One of our constructions involves an 8-vertex embedding of the dunce hat into a polytopal 4-sphere with 8 vertices such that a regular neighborhood is Mazur's contractible 4-manifold. Finally, decidability questions in this context are discussed.
We study the Hitting Set and Feedback Vertex Set problems through the paradigm of finding diverse collections of r solutions of size at most k each, which has recently been introduced to the field of parameterized complexity [Baste et al., 2019]. We use two measures for the diversity of such a collection: the sum of all pairwise Hamming distances, and the minimum pairwise Hamming distance. We show that both problems are FPT in k+r for both diversity measures. A key ingredient in our algorithms is a problem-independent network flow formulation that, given a set of base solutions, computes a maximally diverse collection of solutions. This could be of independent interest.
See Errata
We present a linear-time algorithm for finding the quadrilateral of largest area contained in a convex polygon, and we show that it is closely related to an old algorithm for the smallest enclosing parallelogram of a convex polygon.
The algorithms are given in pseudocode in Appendices A–C, and they have been implemented in Python.
A polyiamond is an edge-connected set of cells on the triangular lattice. Let T(n) denote the number of distinct (up to translation) polyiamonds made of n cells. It is known that the limit λT of T(n+1)/T(n) (the growth constant) exists. In this paper, we improve the best upper bound on λT from 4 to 3.6108.
The proof relies on a computer calculation, which was done in the Sage system with a simple script.
Given three points in the plane, we construct the plane geometric network of smallest geometric dilation that connects them. The geometric dilation of a plane network is defined as the maximum dilation (distance along the network divided by Euclidean distance) between any two points on its edges. The optimum network is either a line segment, a Steiner tree, or a curve consisting of two straight edges and a segment of a logarithmic spiral.
An alternating-current network is an undirected graph in which each edge is weighted by a conductance, a complex number with positive real part. Some nodes are designated as boundary nodes. The response map is the linear map from the vector of voltages at the boundary nodes to the vector of currents flowing into the network through these nodes.
We prove that the known necessary conditions for a linear map to be a response map are sufficient, and we show how to construct an appropriate alternating-current network for a given response map.
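For concreteness, the response map of a finite network is the Schur complement of the complex-weighted graph Laplacian onto the boundary nodes; the sketch below shows this standard computation (not the paper's construction), with illustrative names:

```python
import numpy as np

def response_map(n, edges, boundary):
    """Schur complement of the complex-weighted Laplacian onto the boundary nodes."""
    K = np.zeros((n, n), dtype=complex)
    for u, v, c in edges:  # c = conductance, a complex number with Re(c) > 0
        K[u, u] += c; K[v, v] += c
        K[u, v] -= c; K[v, u] -= c
    b = list(boundary)
    i = [v for v in range(n) if v not in set(boundary)]
    # Eliminating the interior nodes gives I_b = (K_bb - K_bi K_ii^{-1} K_ib) V_b.
    return K[np.ix_(b, b)] - K[np.ix_(b, i)] @ np.linalg.solve(
        K[np.ix_(i, i)], K[np.ix_(i, b)])
```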
See Note on Priority
We give a new combinatorial proof for the number of convex polyominoes whose minimum enclosing rectangle has given dimensions. We also count the subclass of these polyominoes that contain the lower left corner of the enclosing rectangle (directed polyominoes). We indicate how to sample random polyominoes in these classes. As a side result, we calculate the first and second moments of the number of common points of two monotone lattice paths between two given points.
We examine how the measure and the number of vertices of the convex hull of a random sample of n points from an arbitrary probability measure in d-dimensional space relate to the wet part of that measure. This extends classical results for the uniform distribution from a convex set [Bárány and Larman 1988]. The lower bound of Bárány and Larman continues to hold in the general setting, but the upper bound must be relaxed by a factor of log n. We show by an example that this is tight.
The preprint on my homepage contains as an additional appendix a self-contained proof of the ε-net lemma from Komlós, Pach, and Woeginger [1992], which is used to show the existence of ε-nets.
We study the following separation problem: Given a collection of colored objects in the plane, compute a shortest "fence" F, i.e., a union of curves of minimum total length, that separates every two objects of different colors. Two objects are separated if F contains a simple closed curve that has one object in the interior and the other in the exterior. We refer to the problem as GEOMETRIC k-CUT, where k is the number of different colors, as it can be seen as a geometric analogue to the well-studied multicut problem on graphs. We first give an O(n^4 log^3 n)-time algorithm that computes an optimal fence for the case where the input consists of polygons of two colors and n corners in total. We then show that the problem is NP-hard for the case of three colors. Finally, we give a (2−4/(3k))-approximation algorithm.
For a given sequence of n numbers, we want to find a monotonically increasing sequence of the same length that best approximates it in the sense of minimizing the weighted sum of absolute values of the differences. A conceptually easy dynamic programming approach leads to an algorithm with running time O(n log n). While other algorithms with the same running time are known, our algorithm is very simple. The only auxiliary data structure that it requires is a priority queue. The approach extends to other error measures.
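For the unweighted case, the folklore max-heap formulation of L1 isotonic regression (which may differ in details from the paper's algorithm) fits in a few lines:

```python
import heapq

def isotonic_l1_cost(a):
    """Minimum of sum |a_i - b_i| over all nondecreasing sequences b (unweighted)."""
    heap = []   # max-heap (negated values) of the current optimal levels
    cost = 0
    for x in a:
        heapq.heappush(heap, -x)
        if -heap[0] > x:             # the running maximum exceeds x:
            cost += -heap[0] - x     # pay for pulling it down to x
            heapq.heapreplace(heap, -x)
    return cost

print(isotonic_l1_cost([3, 1, 2]))   # 2, e.g. with b = [2, 2, 2]
```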
A tree with n vertices has at most 95^(n/13) minimal dominating sets. The growth constant λ = 95^(1/13) ≈ 1.4194908 is best possible. It is obtained in a semi-automatic way as a kind of "dominant eigenvalue" of a bilinear operation on sextuples that is derived from the dynamic-programming recursion for computing the number of minimal dominating sets of a tree. The core of the method tries to enclose a set of sextuples in a six-dimensional geometric body with certain properties, which depend on some putative value of λ. This technique is generalizable to other counting problems, and it raises questions about the "growth" of a general bilinear operation.
We also derive an output-sensitive algorithm for listing all minimal dominating sets with linear set-up time and linear delay between successive solutions.
The upper-bound proof on λ requires a computer calculation, for which a Python program is provided.
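As a cross-check of the quantity being bounded, minimal dominating sets of very small trees can be counted by brute force (our sketch, exponential in n):

```python
from itertools import combinations

def count_minimal_dominating_sets(n, edges):
    """Count minimal dominating sets of a graph by exhaustive search."""
    closed = [{v} for v in range(n)]          # closed neighborhoods
    for u, v in edges:
        closed[u].add(v); closed[v].add(u)
    def dominating(S):
        return all(closed[v] & S for v in range(n))
    count = 0
    for k in range(1, n + 1):
        for T in combinations(range(n), k):
            S = set(T)
            if dominating(S) and all(not dominating(S - {v}) for v in S):
                count += 1
    return count

# The 3-vertex path 0-1-2 has two minimal dominating sets: {1} and {0, 2}.
print(count_minimal_dominating_sets(3, [(0, 1), (1, 2)]))  # 2
```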
We show that if a planar graph G has a plane straight-line drawing in which a subset S of its vertices are collinear, then for any set X of points in the plane with |X| = |S|, there is a plane straight-line drawing of G in which the vertices in S are mapped to the points in X. This solves an open problem posed by Ravsky and Verbitsky in 2008. In their terminology, we show that every collinear set is free.
This result has applications in graph drawing, including untangling, column planarity, universal point subsets, and partial simultaneous drawings.
Our goal is to compare two planar point sets by finding subsets of a given size such that a minimum-weight matching between them has the smallest weight. This can be done by a translation of one set that minimizes the weight of the matching. We give efficient algorithms for these problems.
We consider the problem of sandwiching a polytope Δ with a given number k of vertices between two nested polytopes P and Q in d dimensions. The polytope P is not necessarily full-dimensional.
Besides the question of (a) computing Δ, we study the following question: (b) Assuming that the given polytopes P and Q are rational polytopes (polytopes with rational vertex coordinates), does it suffice to look for Δ among the rational polytopes?
This problem has several applications: (1) When Q is a dilation of P (or an offset of P), Δ can serve as a thrifty approximation of P. (2) The polytope nesting problem can model the nonnegative rank of a matrix, and thereby the extension complexity of polytopes, as well as other problems in statistics and communication complexity. It was in this context that question (b) was first asked.
The Koebe–Andreev–Thurston Circle Packing Theorem states that every triangulated planar graph has a circle-contact representation. The theorem has been generalized in various ways. The arguably most prominent generalization assures the existence of a primal-dual circle representation for every 3-connected planar graph. The aim of this note is to give a streamlined and self-contained elementary proof of this result.
Monsky's theorem from 1970 states that a square cannot be dissected into an odd number of triangles of the same area, but it does not give a lower bound for the area differences that must occur.
We extend Monsky's theorem to constrained framed maps; based on this, we can apply a gap theorem from semi-algebraic geometry to a polynomial area difference measure and thus get a lower bound for the area differences that decreases doubly exponentially with the number of triangles. On the other hand, we obtain the first superpolynomial upper bounds for this problem, derived from an explicit construction that uses the Thue–Morse sequence.
A bipartite graph G=(U,V,E) is convex if the vertices in V can be linearly ordered such that for each vertex u in U, the neighbors of u are consecutive in the ordering of V. An induced matching H of G is a matching such that no edge of E connects endpoints of two different edges of H.
We show that in a convex bipartite graph with n vertices and m weighted edges, an induced matching of maximum total weight can be computed in optimal O(n+m) time.
An unweighted convex bipartite graph has a representation of size O(n) that records for each vertex u in U the first and last neighbor in the ordering of V. Given such a compact representation, we compute an induced matching of maximum cardinality in O(n) time.
In convex bipartite graphs, maximum-cardinality induced matchings are dual to minimum chain covers. A chain cover is a covering of the edge set by chain subgraphs, that is, subgraphs that do not contain induced matchings of more than one edge. Given a compact representation, we compute a representation of a minimum chain cover in O(n) time. If no compact representation is given, the cover can be computed in O(n+m) time.
The best previous algorithms for computing a maximum-cardinality induced matching or a minimum chain cover had a running time of O(n^2).
We introduce and study the problem Ordered Level Planarity, which asks for a planar drawing of a graph such that vertices are placed at prescribed positions in the plane and such that every edge is realized as a y-monotone curve. This can be interpreted as a variant of Level Planarity in which the vertices on each level appear in a prescribed total order. We show NP-completeness even for the special case that the number of vertices on each level is bounded by 2 and that the maximum degree is 2. This establishes a tight border of tractability, since the problem is easy if there is at most one vertex per level.
Our result has applications to other problems, such as T-Level Planarity, Clustered Level Planarity, and Constrained Level Planarity. We strengthen previous hardness results and solve open questions that were posed by Fulek, Pelsmajer, Schaefer and Štefankovič (2015) and by Angelini, Da Lozzo, Di Battista, Frati and Roselli (2013). In particular, our result implies that a published polynomial algorithm by Katz, Krug, Rutter and Wolff (GD'09) for Manhattan Geodesic Planarity is incorrect unless P=NP.
We show that any d-colored set of points in general position in d dimensions can be partitioned into n subsets with disjoint convex hulls such that the set of points and all color classes are partitioned as evenly as possible. This extends results by Holmsen, Kynčl, and Valculescu (2017) and establishes a central case of their general conjecture. Our proof utilizes a result of Soberón (2012) on simultaneous equipartitions of d continuous measures in d-space by n convex regions, which gives a convex partition with the desired properties, except that points may lie on the boundaries of the regions. In order to resolve the ambiguous assignment of these points, we set up a network flow problem. The equipartition of the continuous measures gives a fractional flow. The existence of an integer flow then yields the desired partition of the point set.
Given a set of points in the plane, we want to establish a connection network between these points that consists of several disjoint layers. Motivated by sensor networks, we require that each layer is spanning and plane, and that no edge is too long when compared to the length needed to obtain a spanning tree.
We consider two different approaches. First we show an almost optimal centralized approach to extract two layers. Then we consider a distributed model in which each point can compute its adjacencies using only information about vertices at most a predefined distance away. We obtain a constant-factor approximation with respect to the length of the longest edge in the graphs. In both cases the obtained layers are plane.
We give new algorithms for generating all n-tuples over an alphabet of m letters, changing only one letter at a time (Gray codes). These algorithms are based on the connection with variations of the Towers of Hanoi game. Our algorithms are loopless, in the sense that the next change can be determined in a constant number of steps, and they can be implemented in hardware. We also give another family of loopless algorithms that is based on the idea of working ahead and saving the work in a buffer.
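For comparison, the classical reflected m-ary Gray code already has the one-letter-at-a-time property, although its straightforward recursive generator is not loopless; a sketch (ours, not one of the paper's algorithms):

```python
def reflected_gray(n, m):
    """Yield all n-tuples over {0,...,m-1}; consecutive tuples differ in one position."""
    if n == 0:
        yield ()
        return
    for i, prefix in enumerate(reflected_gray(n - 1, m)):
        digits = range(m) if i % 2 == 0 else range(m - 1, -1, -1)
        for d in digits:                 # zigzag in the last coordinate
            yield prefix + (d,)

print(list(reflected_gray(2, 3)))
# [(0,0), (0,1), (0,2), (1,2), (1,1), (1,0), (2,0), (2,1), (2,2)]
```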
The appendix of the conference version (1.) contains, as additional material, prototype simulations of the most important algorithms in the programming language Python. The appendix of the arXiv preprint (2.) contains simulations of all algorithms of the paper. Program source code is contained with the source files of the paper. The journal version (3.) contains several additional sections that discuss the results in a larger context.
See Note on the Python programs in the appendix
We are given a graph with n vertices V={1,…,n}. On each vertex, a distinct token from the set {T_1,…,T_n} is placed. A swap is an exchange of tokens between adjacent vertices. We consider the algorithmic question of finding a shortest sequence of swaps such that each token T_i is on vertex i. We are able to achieve essentially matching upper and lower bounds, for exact algorithms and approximation algorithms. For exact algorithms, we rule out 2^o(n)-time algorithms under the Exponential Time Hypothesis (ETH). This is matched with a simple 2^O(n log n)-time algorithm based on exploring the state space. We give a general 4-approximation algorithm and show APX-hardness. Thus, there is a constant δ>1 such that every polynomial-time approximation algorithm has approximation factor at least δ.
Our results also hold for a generalized version, where tokens and vertices are colored. In this generalized version each token must go to a vertex with the same color.
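The simple exact algorithm alluded to above is plain breadth-first search over the n! token placements; a minimal sketch (our illustration):

```python
from collections import deque

def min_swap_distance(n, edges, start):
    """Fewest swaps to sort tokens; start[v] = token on vertex v, target = identity."""
    start, target = tuple(start), tuple(range(n))
    dist = {start: 0}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == target:
            return dist[s]
        for u, v in edges:               # swap the tokens on adjacent vertices
            t = list(s)
            t[u], t[v] = t[v], t[u]
            t = tuple(t)
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)

# Reversing 3 tokens on a path needs 3 swaps:
print(min_swap_distance(3, [(0, 1), (1, 2)], [2, 1, 0]))  # 3
```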
We give a deterministic O(n log n)-time algorithm to decide if two n-point sets in 4-dimensional Euclidean space are the same up to rotations and translations.
A lower bound of Ω(n log n) has been known, and it has been conjectured that O(n log n) algorithms should exist for any fixed dimension. The best algorithms in d-space so far are a deterministic algorithm by Brass and Knauer (2000) and a randomized Monte Carlo algorithm by Akutsu (1998). In 4-space, they take time O(n^2 log n) and O(n^{3/2} log n), respectively.
Our algorithm exploits the geometric structure and the properties of 4-dimensional space, such as angles between planes, packing numbers, Hopf fibrations, Plücker coordinates, the classification of Coxeter groups, and rotations in 4-space.
I will survey algorithms for testing whether two point sets are congruent, that is, equal up to a Euclidean isometry. I will introduce the important techniques for congruence testing, namely dimension reduction and pruning, or more generally, condensation. I will illustrate these techniques on the three-dimensional version of the problem, and indicate how they lead for the first time to an algorithm for four dimensions with near-linear running time (joint work with Heuna Kim). On the way, we will encounter some beautiful and symmetric mathematical structures, like the regular polytopes, and Hopf fibrations of the three-dimensional sphere in four dimensions.
We consider the piecewise linear approximation of saddle functions of the form f(x,y) = ax^2 − by^2 under the L∞ error norm. We show that interpolating approximations are not optimal. One can get slightly smaller errors by allowing the vertices of the approximation to move away from the graph of the function.
Windrose planarity asks for planar drawings of a graph where every edge is monotone both in the x- and the y-direction. Moreover, for each edge uv it is specified in which quadrant (NE, NW, SW, or SE) v lies relative to u. While the well-known notion of upward planarity can be used to visualize a partial order, windrose planarity simultaneously visualizes two partial orders defined by the edges of the same graph by exploiting both the horizontal and the vertical relationships among vertices.
Testing whether there exists a windrose-planar drawing is NP-hard in the general case. We give a polynomial-time algorithm for the case when the desired combinatorial embedding is specified as part of the input. This algorithm is based on a characterization of the plane triangulations admitting a windrose-planar drawing. Furthermore, for any embedded graph admitting a windrose-planar drawing we show how to construct one with at most one bend per edge on an O(n)×O(n) grid. This contrasts with the fact that straight-line windrose-planar drawings may require exponential area.
A simple topological graph is a topological graph in which any two edges have at most one common point, which is either a common endpoint or a proper crossing. More generally, in a k-simple topological graph, every pair of edges has at most k common points of this kind. We construct saturated simple and 2-simple graphs with few edges. These are k-simple graphs in which no further edge can be added. We improve the previous bounds of Kynčl, Pach, Radoičić, and Tóth (2013) and show that there are saturated simple graphs on n vertices with 7n edges and saturated 2-simple graphs on n vertices with 14.5n edges. As a consequence, 14.5n edges is also a new upper bound for k-simple graphs (considering all values of k). We also construct saturated simple and 2-simple graphs that have some vertices with low degree.
We show how to preprocess a polygonal domain with a fixed starting point s in order to answer efficiently the following queries: Given a point q, how should one move from s in order to see q as soon as possible? This query resembles the well-known shortest-path-to-a-point query, except that the latter asks for the fastest way to reach q, instead of seeing it. Our solution methods include a data structure for a different generalization of shortest-path-to-a-point queries, which may be of independent interest: to report efficiently a shortest path from s to a query segment in the domain.
Topi Talvitie has provided an interactive visualization of the visibility map for the problem.
The maximum number of non-crossing straight-line perfect matchings that a set of n points in the plane can have is known to be O(10.0438^n) and Ω*(3^n), where the Ω* notation hides polynomial factors in the asymptotic expression. The lower bound, due to García, Noy, and Tejel (2000), is attained by the double chain, which has Θ(3^n/n^Θ(1)) such matchings. We reprove this bound in a simplified way that uses the novel notion of down-free matchings. We then apply this approach to several other constructions. As a result, we improve the lower bound: First we show that the double zigzag chain with n points has Θ*(λ^n) non-crossing perfect matchings with λ≈3.0532. Next we analyze further generalizations of double zigzag chains: double r-chains. The best choice of parameters leads to a construction that has Θ*(μ^n) non-crossing perfect matchings with μ≈3.0930. The derivation of this bound requires an analysis of a coupled dynamic-programming recursion between two infinite vectors.
Moral: Don't count your boobies until they are hatched matched.
We consider the worst-case query complexity of variants of certain PPAD-complete search problems. Suppose we are given a graph G and a vertex s in G. Let G' denote the directed graph obtained from G by directing each edge in both directions. D is a directed subgraph of G' that is unknown to us, except that it consists of vertex-disjoint directed paths and cycles, and one of the paths originates in s. Our goal is to find an endvertex of a path by using as few queries as possible. A query specifies a vertex v in G, and the answer is the set of edges of D incident to v, together with their directions.
We also show lower bounds for the special case when D consists of a single path. Our proofs use the theory of graph separators. Finally, we consider the case when the graph G is a grid graph. In this case, using the connection with separators, we give asymptotically tight bounds as a function of the size of the grid, if the dimension of the grid is considered as fixed. For this, we prove a separator theorem about grid graphs, which is interesting in its own right.
We study plane straight-line embeddings of graphs where certain edges are specified to be longer than other edges. We analyze which graphs are universal in the sense that they allow a plane embedding for any total strict order on the edge lengths. In addition, we also briefly consider circular arc drawings with relative edge length specifications.
When comparing two different styles to draw a graph, one can wonder how much better or worse one style can be than another one, in the worst case, for some quality measure. We compare planar straight-line drawings with fixed and free embeddings, planar circular arc drawings, and non-planar straight-line drawings, and consider the quality measures angular resolution, area requirement, edge length ratio, and feature resolution.
A polyomino (or lattice animal) is an edge-connected set of squares on the two-dimensional square lattice. Counting polyominoes is an extremely hard problem in enumerative combinatorics, with important applications in statistical physics for modeling processes of percolation and collapse of branched polymers. We investigated a fundamental question related to polyominoes, namely, what is their growth constant, the asymptotic ratio between A(n + 1) and A(n) when n goes to infinity, where A(n) is the number of polyominoes of size n. This value is also known as "Klarner's constant" and denoted by λ. So far, the best lower and upper bounds on λ were roughly 3.98 and 4.65, respectively, and so not even a single decimal digit of λ was known. Using extremely high computing resources, we have shown (still rigorously) that λ>4.00253, thereby settling a long-standing problem: proving that the leading digit of λ is 4.
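For very small n, the values A(n) can be reproduced by growing polyominoes cell by cell and canonicalizing up to translation; this toy enumeration (ours) is of course nothing like the computation behind the new bound:

```python
def count_polyominoes(n):
    """A(n): polyominoes with n cells, counted up to translation."""
    def normalize(cells):
        mx = min(x for x, y in cells)
        my = min(y for x, y in cells)
        return frozenset((x - mx, y - my) for x, y in cells)
    level = {frozenset([(0, 0)])}
    for _ in range(n - 1):
        nxt = set()
        for p in level:
            for x, y in p:
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    if (x + dx, y + dy) not in p:
                        nxt.add(normalize(p | {(x + dx, y + dy)}))
        level = nxt
    return len(level)

print([count_polyominoes(n) for n in range(1, 6)])  # [1, 2, 6, 19, 63]
```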
See Note
The Fréchet distance between two curves is the maximum distance in a simultaneous traversal of the two curves. We refine this notion by not only looking at the maximum distance but at all other distances. Roughly speaking, we want to minimize the time T(s) during which the distance exceeds a threshold s, subject to upper speed constraints. We optimize these times lexicographically, giving more weight to larger distances s. For polygonal curves in general position, this criterion produces a unique monotone matching between the points on the two curves, which is important for applications like morphing, and we can compute this matching in polynomial time.
We generalize regular subdivisions (polyhedral complexes resulting from the projection of the lower faces of a polyhedron) introducing the class of recursively-regular subdivisions. Informally, a recursively-regular subdivision is a subdivision that can be obtained by splitting some faces of a regular subdivision by other regular subdivisions (and continuing recursively). We also define the finest regular coarsening and the regularity tree of a polyhedral complex. We prove that recursively-regular subdivisions are not necessarily connected by flips and that they are acyclic with respect to the in-front relation. We show that the finest regular coarsening of a subdivision can be efficiently computed, and that whether a subdivision is recursively regular can be efficiently decided. As an application, we also extend a theorem known since 1981 on illuminating space by cones and present connections of recursive regularity to tensegrity theory and graph-embedding problems.
Given k finite point sets in the plane, we are interested in finding one translation for each point set such that the union of the translated point sets is in convex position. We show that if k is part of the input, then it is NP-hard to determine if such translations exist, even when each point set consists of at most three points.
The original motivation of this problem comes from the question of whether a given triangulation T of a point set is the empty shape triangulation with respect to some (strictly convex) shape S. In other words, we want to find a shape S such that the triangles of T are precisely those triangles about which we can circumscribe a homothetic copy of S that does not contain any other vertices of T. This is the Delaunay criterion with respect to S; for the usual Delaunay triangulation, S is the circle.
We study the discrete Voronoi game, where two players alternately claim vertices of a graph for t rounds. In the end, the remaining vertices are divided such that each player receives the vertices that are closer to his or her claimed vertices. We prove that there are graphs for which the second player gets almost all vertices in this game, but this is not possible for bounded-degree graphs. For trees, the first player can get at least a quarter of the vertices, and we give examples where she can get only little more than a third of them. We make some general observations, relating the result with many rounds to the result for the one-round game on the same graph.
Let P be a set of n points in the plane. A topological hypergraph G on the set of points of P is a collection of simple closed curves in the plane that avoid the points of P. Each of these curves is called an edge of G, and the points of P are called the vertices of G. We provide bounds on the number of edges of topological hypergraphs in terms of the number of their vertices under various restrictions assuming the set of edges is a family of pseudo-circles.
We study a symmetric configuration of n+1 Gaussian kernels for which there are exactly n+2 modes. We show that all modes lie on a finite set of lines, which we call axes, and we study the restriction of the Gaussian mixture to these axes in order to discover that there are exponentially many critical points in this configuration. Although the existence of ghost modes remained unknown due to the difficulty of finding examples in the plane, we show that the resilience of ghost modes grows like the square root of the dimension. In addition, we exhibit finite configurations of isotropic Gaussian kernels with superlinearly many modes.
No such function p(k) exists in general. However, in the restricted case in which points only appear but never disappear, we give a coloring algorithm guaranteeing the property at any time with p(k)=3k-2. This can be interpreted as coloring point sets in the plane with k colors such that any bottomless rectangle containing at least 3k-2 points contains at least one point of each color. We also prove a lower bound p(k)>1.67k. Chen, Pach, Szegedy, and Tardos (2009) proved that such colorings do not always exist in the case of general axis-aligned rectangles. Our result also complements recent contributions of Keszegh and Pálvölgyi on cover-decomposability of octants (2011,2012).
An important objective in the choice of a triangulation of a point set is that the smallest angle becomes as large as possible. In the straight-line case, it is known that the Delaunay triangulation is optimal in this respect. We propose and study the concept of a circular arc triangulation, which offers additional flexibility for enlarging small angles. We show that angle optimization and related questions lead to a linear programming problem that can be solved by simple graph-theoretic methods.
We also show how to modify the method so that vertices become very close but not coincident.
The proof involves a new visibility property of polygons, namely that every simple polygon has a visibility-increasing edge where, as a point travels from one endpoint of the edge to the other, the visibility region of the point increases.
This is a critical discussion of a methodological-mathematical aspect of the comparison of journals in the 2008 study of the use of citation counts and impact factors by Robert Adler, John Ewing, and Peter Taylor, which was commissioned by the International Mathematical Union; see http://www.mathunion.org/fileadmin/IMU/Report/CitationStatistics.pdf.
A line g is a transversal to a family F of convex polytopes in 3-dimensional space if it intersects every member of F. If, in addition, g is an isolated point of the space of line transversals to F, we say that F is a pinning of g. We show that any minimal pinning of a line by convex polytopes such that no face of a polytope is coplanar with the line has size at most eight. If, in addition, the polytopes are disjoint, then it has size at most six. We completely characterize configurations of disjoint polytopes that form minimal pinnings of a line.
In: Proceedings of the 21st Annual Canadian Conference on Computational Geometry, Vancouver, August 17–19, 2009, pp. 87–90. (This is a preliminary short version with parts of the results.) →BibTeX
Constant-work-space algorithms may use only constantly many cells of storage in addition to their input, which is provided as a read-only array. We show how to construct several geometric structures efficiently in the constant-work-space model. Traditional algorithms process the input into a suitable data structure (like a doubly-connected edge list) that allows efficient traversal of the structure at hand. In the constant-work-space setting, however, we cannot afford to do this. Instead, we provide a zero-space data structure that constructs the desired features on the fly by accessing the input. The whole geometric structure can be obtained by using this data structure to enumerate all the features. Of course, we must pay for the space savings by slower running times. While the standard data structure allows us to implement traversal operations in constant time, our schemes typically take linear time to read the input data in each step.
We begin with two simple problems: triangulating a planar point set and finding the trapezoidal decomposition of a simple polygon. In both cases adjacent features can be enumerated in linear time per step, resulting in total quadratic running time to output the whole structure. Actually, we show that the former result carries over to the Delaunay triangulation, and hence the Voronoi diagram. This also means that we can compute the largest empty circle of a planar point set in quadratic time and constant work-space. As another application, we demonstrate how to enumerate the features of a Euclidean minimum spanning tree in quadratic time per step, so that the whole tree can be found in cubic time using constant work-space.
Finally, we describe how to compute a shortest geodesic path between two points in a simple polygon. Although the shortest path problem in general graphs is NL-complete, this constrained problem can be solved in quadratic time using only constant work-space.
We study flip graphs of (pseudo-)triangulations whose maximum vertex degree is bounded by a constant k. In particular, we consider triangulations of sets of n points in convex position in the plane and prove that their flip graph is connected if and only if k>6; the diameter of the flip graph is O(n^2). We also show that for general point sets, flip graphs of pointed pseudo-triangulations can be disconnected for k<10, and flip graphs of triangulations can be disconnected for any k.
As an application, we discuss how to compute the maximal locally convex function for a polygon whose corners lie on its convex hull. This extends results that were presented in the earlier paper A pointed Delaunay pseudo-triangulation of a simple polygon mentioned below.
Our studies are motivated by the construction of discrete Laplace-Beltrami operators.
Let S be a set of n points in general position in the plane. For each point of S we are given a parity constraint, telling whether it should be even or odd. We study how well such constraints can be satisfied by various classes of planar graphs on S. Specifically, we show that we can always find a plane tree, a two-connected outerplanar graph, or a pointed pseudotriangulation that satisfies all but at most three parity constraints. With triangulations, we can satisfy about 2/3 of all parity constraints.
For a polygon with holes, it is NP-complete to decide whether it has a triangulation that satisfies all parity constraints on the vertices.
We study the parameterized complexity of the following fundamental geometric problems with respect to the dimension d:
We show that the decision versions of all these problems are W[1]-hard when parameterized by the dimension d, and hence not solvable in O(f(d) n^c) time, for any computable function f and constant c (unless FPT=W[1]). Our reductions also give an n^Ω(d)-time lower bound (under the Exponential Time Hypothesis).
, August 13–15, 2008, pp. 131–134.
We consider different variants of this problem where N is either a general graph or restricted to a tree, and the subnetwork F that we are looking for is either a simple path, a path with self-intersections at vertices, or a tree. We give polynomial-time algorithms, NP-hardness and NP-completeness proofs, approximation algorithms, and also fixed-parameter tractable algorithms.
Our algorithm starts from an r-sample S of F. Based on S, a set of surface covering balls with maximal radii is calculated such that the topology is retained. From the weighted α-shape of a proper subset of these highly overlapping surface balls we get the desired polytope. As there is a range for the possible radii for the surface balls, the method can be used to construct triangular surfaces from point clouds in a scalable manner. We also briefly sketch how to combine parts of our algorithm with existing medial axis algorithms for balls, in order to compute stable medial axis approximations with scalable level of detail.
We present an algorithm for finding the smallest side length for 3 axis-aligned cubes that cover a given n-point set in d dimensions in O(6^d dn log(dn)) time. This shows that the problem is fixed-parameter tractable when parameterized with d. On the other hand, using tools from parameterized complexity theory, we show that this is unlikely to be the case for 4 cubes, as well as for the k-center problem in the Euclidean metric, for any k>1. In particular, we prove that deciding whether a given d-dimensional set of n points can be covered by the union of 2 balls of given radius, or by 4 axis-aligned cubes of given side length, is W[1]-hard with respect to d, and thus not fixed-parameter tractable unless FPT=W[1].
We show that the maximum cardinality of an anti-chain composed of intersections of a given set of n points in the plane with half-planes is close to n^2. We approach this problem by establishing the equivalence with the problem of the maximum monotone path in an arrangement of n lines. For a related problem on antichains in families of convex pseudo-discs we can establish the precise asymptotic bound: it is quadratic in n. The sets in such a family are characterized as intersections of a given set of n points with convex sets, such that the difference between the convex hulls of any two sets is nonempty and connected.
Principal component analysis (PCA) is commonly used to compute a bounding box of a point set in R^d. The popularity of this heuristic lies in its speed, easy implementation and in the fact that usually, PCA bounding boxes approximate the minimum-volume bounding boxes quite well.
There are examples of discrete point sets in the plane showing that the worst-case ratio tends to infinity. Thus, we consider PCA bounding boxes for continuous sets, especially for the convex hull of a point set. We contribute new upper bounds on the approximation factor of PCA bounding boxes of convex sets in R^2 and R^3.
We present a constructive method for embedding a 3-connected planar graph with n vertices as a 3-polytope with small integer coordinates. The embedding will need no coordinate greater than O(2^(7.55n)) = O(188^n). Finding a plane embedding which supports an equilibrium stress is the crucial part in the construction. In order to guarantee that the size of the coordinates and the stresses are small, we extend Tutte's spring embedding method.
The lower bound extends to variants of the Fréchet distance such as the weak as well as the discrete Fréchet distance. For two polygonal curves in one dimension, we give a linear-time algorithm to compute their weak Fréchet distance.
This is an introduction to combinatorial topology, with an emphasis on subjects that are of interest for computational geometry in two and three dimensions. It covers the notions of homotopy and isotopy, simplicial homology, Betti numbers, and basic results from Morse Theory.
Meshing is the process of computing, for a given surface, a representation consisting of pieces of simple surface patches, like triangles. This survey discusses all currently known surface (and curve) meshing algorithms that come with correctness and quality guarantees.
Using a similar approach, we design an algorithm with a runtime of O(n^2 log n), for computing a tangent-continuous approximation with the minimum number of biarcs, for a sequence of points with given tangent directions.
We consider the problem of finding obnoxious centers in graphs. For arbitrary graphs with n vertices and m edges, we give a randomized algorithm with O(n log^2 n + m log n) expected time. For planar graphs, we give algorithms with O(n log n) expected time and O(n log^3 n) worst-case time. For graphs with bounded treewidth, we give an algorithm taking O(n log n) worst-case time. The algorithms make use of parametric search and several results for computing distances on graphs of bounded treewidth and planar graphs.
Here are the Python programs and data for these calculations.
See Note on priority
Every three-connected planar graph with n vertices has a drawing on an O(n^2) × O(n^2) grid in which all faces are strictly convex polygons. More generally, there is a drawing on an O(nw) × O(n^3/w) grid, for any choice of a parameter w in the range 1<w<n.
These drawings are obtained by perturbing (not strictly) convex drawings on O(n) × O(n) grids. Tighter bounds are obtained when the faces have fewer sides. In the proof, we derive, as an auxiliary result, that the right triangle (0,0), (a,0), (a,b) contains at least ab/4 primitive vectors, for every integer a≥1 and every b≥2.
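The auxiliary claim about primitive vectors is easy to check numerically (our sketch):

```python
from math import gcd

def primitive_vectors_in_triangle(a, b):
    """Count lattice points (x, y) with gcd(x, y) = 1 in the triangle (0,0), (a,0), (a,b)."""
    return sum(1 for x in range(1, a + 1)
                 for y in range(0, b + 1)
                 if a * y <= b * x and gcd(x, y) == 1)

print(primitive_vectors_in_triangle(8, 5))  # 15 >= 8*5/4 = 10
```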
This paper improves a conference paper by the second author with the same title, in which a weaker bound of O(n^(7/3)) × O(n^(7/3)) on the grid size had been established:
In: Proceedings of the 16th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), Vancouver, January 2005, pp. 728–734. →BibTeX
See Erratum
The proof relies on halving pairs, pairs of points dividing a given closed curve C in two parts of equal length, and their minimum and maximum distances h and H. Additionally, we analyze curves of constant halving distance (h=H), examine the relation of h to other geometric quantities and geometric problems (like Ulam's floating body problem) and prove some new dilation bounds.
In particular, we show that the multiple Fréchet distance can be bounded in terms of pairwise Fréchet distances, and we provide an example which shows the tightness of this bound.
Contour trees are used when high-dimensional data are preprocessed for efficient extraction of iso-contours for the purpose of visualization. So far, efficient algorithms for contour trees are based on processing the data in sorted order. We present a new algorithm that avoids sorting the whole data set, but sorts only the component-critical points. They form only a small fraction of the points, for typical data that arise in practice. The algorithm works in any dimension.
In addition, we provide a comprehensive discussion of critical and regular points of piecewise linear functions of two and three variables.
We study non-crossing frameworks in the plane for which the classical reciprocal on the dual graph is also non-crossing. We give a complete description of the self-stresses on non-crossing frameworks G whose reciprocals are non-crossing, in terms of: the types of faces (only pseudo-triangles and pseudo-quadrangles are allowed); the sign patterns in the stress on G; and a geometric condition on the stress vectors at some of the vertices.
As in other recent papers where the interplay of non-crossingness and rigidity of straight-line plane graphs is studied, pseudo-triangulations show up as objects of special interest. For example, it is known that all planar Laman circuits can be embedded as a pseudo-triangulation with one non-pointed vertex. We show that for such pseudo-triangulation embeddings of planar Laman circuits which are sufficiently generic, the reciprocal is non-crossing and again a pseudo-triangulation embedding of a planar Laman circuit. For a singular (non-generic) pseudo-triangulation embedding of a planar Laman circuit, the reciprocal is still non-crossing and a pseudo-triangulation, but its underlying graph may not be a Laman circuit. Moreover, all the pseudo-triangulations which admit a non-crossing reciprocal arise as the reciprocals of such, possibly singular, stresses on pseudo-triangulation Laman circuits.
All self-stresses on a planar graph correspond to liftings to piecewise linear surfaces in 3-space. We prove characteristic geometric properties of the lifts of such non-crossing reciprocal pairs.
We also give a precise definition and analysis of "realistic" problem instances which are typical of data arising in practice and do not exhibit the theoretical worst-case behavior.
The solution is obtained through a graph-theoretic model.
We present three combinatorial bounds: the ratio of the size of a minimum pseudotriangulation of a point set S to the size of a minimal pseudotriangulation contained in a triangulation T of S, and the ratio of the size of the best minimal pseudotriangulation to that of the worst minimal pseudotriangulation, both contained in a given triangulation T. We also present a linear-time algorithm for finding a minimal pseudotriangulation contained in a given triangulation, and we study the minimum pseudotriangulation containing a given set of non-crossing line segments.
We address the problem of how to cover a set of required points by a small number of axis-parallel ellipses that avoid a second set of forbidden points. We study geometric properties of such covers and present an efficient randomized approximation algorithm for the cover construction. This question is motivated by a pattern recognition task where one has to identify ellipse-shaped protein spots in two-dimensional electrophoresis images.
) is the best possible.
We introduce the polytope of pointed pseudo-triangulations of a point set in the plane, defined as the polytope of infinitesimal expansive motions of the points subject to certain constraints on the increase of their distances. Its 1-skeleton is the graph whose vertices are the pointed pseudo-triangulations of the point set and whose edges are flips of interior pseudo-triangulation edges.
For points in convex position we obtain a new realization of the associahedron, i.e., a geometric representation of the set of triangulations of an n-gon, or of the set of binary trees on n vertices, or of many other combinatorial objects that are counted by the Catalan numbers. By considering the 1-dimensional version of the polytope of constrained expansive motions we obtain a second distinct realization of the associahedron as a perturbation of the positive cell in a Coxeter arrangement.
Our methods produce as a by-product a new proof that every simple polygon or polygonal arc in the plane has expansive motions, a key step in the proofs of the Carpenter's Rule Theorem by Connelly, Demaine and Rote (2000) and by Streinu (2000).
Motivated by several open questions on triangulations and pseudotriangulations, we give closed-form expressions for the number of triangulations and the number of minimum pseudo-triangulations of n points in wheel configurations, that is, with n−1 points in convex position. Although the numbers of triangulations and pseudotriangulations vary depending on the placement of the interior point, their difference is always the (n−2)nd Catalan number.
We also prove the inequality #PT ≤ 3^i · #T for the numbers of minimum pseudo-triangulations and triangulations of any point configuration with i interior points.
We show the following two results on a set of n points in the plane, thus answering questions posed by Erdős and Purdy (1971): 1. The maximum number of triangles of maximum area (or of maximum perimeter) in a set of n points in the plane is exactly n. 2. The maximum possible number of triangles of minimum positive area in a set of n points in the plane is Θ(n^2).
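Result 1 can be explored by brute force on small point sets (our sketch, cubic in n; integer coordinates keep the area comparisons exact):

```python
from itertools import combinations

def count_max_area_triangles(points):
    """Count the triangles of maximum area spanned by a planar point set."""
    def area2(p, q, r):   # twice the unsigned triangle area
        return abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))
    areas = [area2(p, q, r) for p, q, r in combinations(points, 3)]
    return areas.count(max(areas))
```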
Thus, when the number of constraints is fixed, integer programming in two variables is not harder than greatest common divisor computation. Our algorithm avoids the binary search that is associated with the usual approach by letting the objective function slide into the polyhedron, until the lattice width of the truncated polyhedron equals the flatness constant from Khintchin's Flatness theorem.
This result is achieved in two steps. First we prove that the classical Gaussian algorithm for ternary form reduction, in the variant of Lagarias, has this worst-case running time. Then we show that, given a ternary form which is reduced in the Gaussian sense, it takes only a constant number of arithmetic operations and a constant number of binary-form reductions to fully reduce the form.
Finally we describe how this algorithm can be generalized to higher dimensions. Lattice basis reduction and shortest vector computation in fixed dimension d can be done with O(M(s) log^(d−1) s) bit operations.
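In dimension two, the reduction step underlying these algorithms is the classical Lagrange–Gauss basis reduction, sketched here in textbook form (without the fast arithmetic that gives the stated bit complexity):

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a planar lattice basis of integer vectors."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    while True:
        if dot(u, u) > dot(v, v):
            u, v = v, u                       # keep u the shorter basis vector
        m = round(dot(u, v) / dot(u, u))      # nearest-integer size reduction
        if m == 0:
            return u, v                       # now |u| <= |v| and |u.v| <= |u|^2 / 2
        v = (v[0] - m * u[0], v[1] - m * u[1])

print(gauss_reduce((1, 0), (4, 1)))  # ((1, 0), (0, 1))
```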
See also Erik Demaine's linkage page for information about related problems.
The special case of this problem where in every family the jobs have at most two different due dates is known to be NP-complete [Bruno and Downey 1978]. The main result of this paper is a polynomial time algorithm for the remaining open case where in every family all the jobs have the same due date. This case may be formulated as a dual resource allocation problem with a tree-structured constraint system, which can be solved to optimality in polynomial time.
This question was posed by Günter Ziegler in his book Lectures on Polytopes (1995) (exercise 0.15). Our bounds improve the best previously known bound of O(d!), which follows from a simple volume argument due to Imre Bárány. The known lower bounds are exponential, the current record being 3.6^d for large enough d (Thomas Christof and Gerhard Reinelt, 1997); they follow from explicit polytopes in low dimensions. (Since the paper was published, superexponential lower bounds were proved by Bárány and Pór.)
We also prove bounds for the number of faces of lower dimensions, and for integral convex polytopes with coordinates between 0 and k.
In general, this problem is NP-complete. Two conditions are described, which both together imply that it can be solved in O(n^2) time. If only one of the conditions is satisfied, the problem remains NP-complete.
We prove that no asymptotically faster algorithms can solve these problems. This is done by establishing Ω(n log n) lower bounds in the algebraic computation tree model of computation. Moreover, we develop for every ε>0 approximation algorithms with linear running time O(n log(1/ε)) that deliver feasible schedules whose makespan is at most 1+ε times the optimum makespan.
This problem arises in the performance analysis of certain on-line navigation strategies. A closely related problem concerns curves with increasing chords.
We give a simple algorithm for the jeep which improves previous solutions of Derick Wood (1984) and Ute and Wilfried Brauer (1989), and we use a linear programming formulation to derive an upper bound which shows that our solution is optimal.
We consider the problem of moving one convex figure over another, minimizing the area of their symmetric difference. We show that if we just let the two centers of gravity coincide, the resulting symmetric difference is within a factor of 11/3 of the optimum. This leads to efficient approximate matching algorithms for convex figures.
We describe algorithms which are guaranteed to find a covering on-line provided that the total volume (the sum of s_i^d) is at least 2^d+3. (The true bound is actually a little smaller.) This performance guarantee is quite strong, considering that the existence of an (off-line) covering can only be ensured if the total volume is at least 2^d−1. The best previous on-line bound was 4^d, due to Wlodek Kuperberg.
We reduce this problem to an online interval covering problem for q-adic intervals. In this problem, we have to cover the unit interval [0,1] by intervals whose sizes are negative powers of q. The endpoints of an interval of side length q^−k are restricted to fall on multiples of q^−k. For this class of problems, we consider a kind of simple on-line algorithm, the so-called "method of the n-th segment", for some integer n>1. We give performance guarantees for these algorithms, and show that the cases n=q+1 and n=q+2 give the best results, depending on q. For q=2^d, these methods carry over to the covering problem for d-dimensional cubes whose side lengths are powers of 2.
As an application, it is possible to compute in polynomial time lower bounds for the minimum weight triangulation by solving an assignment problem or more general network flow problems. We exhibit an easy-to-recognize class of point sets where the minimum-weight triangulation coincides with the greedy triangulation. These are the sets for which the "light" edges, which are crossed by no shorter segments, form a triangulation.
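The greedy triangulation itself is straightforward to state in code; a brute-force sketch (ours) for points in general position:

```python
from itertools import combinations

def greedy_triangulation(points):
    """Insert segments by increasing length, keeping those that cross nothing kept so far."""
    def orient(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def crosses(p, q, r, s):   # proper crossing of segments pq and rs
        return (orient(p, q, r) * orient(p, q, s) < 0 and
                orient(r, s, p) * orient(r, s, q) < 0)
    def length2(e):
        (p, q) = e
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    kept = []
    for p, q in sorted(combinations(points, 2), key=length2):
        if all(not crosses(p, q, r, s) for r, s in kept):
            kept.append((p, q))
    return kept
```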
In this paper we introduce a dynamic programming solution to the problem. It optimally encodes n words in O(n^(C+2)) time, if the costs of the letters are integers between 1 and C. While still leaving open the question of whether the general problem is solvable in polynomial time, our algorithm seems to be the first one that runs in polynomial time for fixed letter costs.
Some numerical results are reported, including solutions using Karp's integer programming formulation, which was solved using AMPL and CPLEX. The AMPL model is included in the appendix and in the PostScript file.
2. Computational Geometry: Theory and Applications 8 (1997), pp. 87-95.
In general, the dispersion of a point set with respect to various classes of range spaces, like balls, squares, triangles, axis-parallel and arbitrary rectangles, spherical caps and slices, is the area of the largest empty range, and it is a measure of the distribution of the points. The main purpose of our paper is to give a survey of this topic, including some folklore results. Furthermore, we prove several properties of the dispersion, generalizing investigations of Niederreiter and others concerning balls. For several well-known uniformly distributed point sets we estimate the dispersion with respect to triangles, and we also compare them computationally. For the dispersion with respect to spherical slices we mention an application to the polygonal approximation of curves in space.
We discuss infinite 0-1-sequences which contain 2n different subwords of length n, for every n. We give a method which can in principle construct all such sequences, using purely combinatorial tools. After giving some concrete examples of sequences with different properties, we give alternative methods for constructing special classes of sequences of complexity 2n. The special class of sequences whose sets of subwords are closed under complementation (exchanging 0's and 1's) is closely related to Sturmian sequences (sequences of complexity n+1), which are well understood.
Keywords: L-systems, infinite words, iterated morphisms
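As a small illustration of subword counting (standard material, not from the paper), the following Python sketch measures the complexity of the Fibonacci word, a Sturmian sequence of complexity n+1 generated by iterating the morphism 0 → 01, 1 → 0:

```python
# Count the distinct subwords of length n in a long prefix of an
# infinite word generated by an iterated morphism.
def iterate_morphism(rules, seed="0", steps=20):
    w = seed
    for _ in range(steps):
        w = "".join(rules[c] for c in w)
    return w

def complexity(word, n):
    return len({word[i:i+n] for i in range(len(word) - n + 1)})

fib = iterate_morphism({"0": "01", "1": "0"})
for n in range(1, 8):
    print(n, complexity(fib, n))   # prints n+1 for every n
```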
See Errata
We also give an algorithm for computing the metric, and we discuss possible implementations.
It is based on an idea that was recently introduced by Avis and Fukuda (1991) for their convex hull algorithm: The idea is to take a pivoting rule from linear programming and to “invert” the path that it takes to the optimal solution in all possible ways, thereby visiting all feasible bases in a depth-first search manner.
Theoretical considerations and computational tests have established that the method takes a long time for degenerate point sets. The reason is that, in the case of degenerate polytopes, the number of feasible bases may exceed the number of facets by far.
Therefore we propose a variation of the method that takes degeneracies into account explicitly: Instead of visiting all feasible bases, the algorithm visits all facets. The manner of visiting facets is analogous to the convex hull algorithm of Chand and Kapur (1970) as it is described and analyzed in Swart (1985).
We also propose a hybrid algorithm that saves time in nondegenerate cases, thus becoming even more competitive with Avis and Fukuda's algorithm.
This study concerns the design of an operating system for vehicles in an automated warehouse. The layout of the warehouse and the number and properties of the vehicles are given. The objective is to maximize the throughput. Using a partial enumeration technique, we simulate several alternatives for the control and interplay of the vehicles within a reasonable time horizon. A subproblem is solved by network flow techniques. The algorithm is implemented as part of an automatic control system, and it has led to a satisfactory performance.
Different versions of the Sandwich algorithm use different rules for selecting the next evaluation point. We consider four natural rules (interval bisection, slope bisection, maximum error rule, and chord rule) and show that the global approximation error with n evaluation points decreases as O(1/n^2), which is optimal.
By special examples we show that the actual performance of the four rules can be very different from each other, and we report computational experiments which compare the performance of the rules for particular functions.
Note: A different proof of the Sandwich algorithm for the two bisection rules is given in the paper Sandwich approximation of univariate convex functions with an application to separable convex programming by Rainer E. Burkard, Horst W. Hamacher, and Günter Rote.
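For illustration, here is a minimal Python sketch of the sandwich principle, under the assumption that the derivative f' is available: on each subinterval the chord is an upper bound and the two endpoint tangents are a lower bound, and the subinterval with the largest gap is bisected. This is one plausible reading of the interval-bisection rule, not the exact procedure analyzed in the papers; the O(1/n^2) error decay is visible in the output.

```python
# Sandwich approximation of a convex function f on [a,b]:
# chords from above, endpoint tangents from below, bisect the worst gap.
import heapq

def gap(f, df, a, b):
    """Largest vertical distance between chord and tangent envelope on [a,b]."""
    fa, fb, da, db = f(a), f(b), df(a), df(b)
    if da == db:                               # f is affine here, no gap
        return 0.0
    x = (fb - fa + da*a - db*b) / (da - db)    # the two tangents meet at x
    chord = fa + (fb - fa) * (x - a) / (b - a)
    return chord - (fa + da * (x - a))

def sandwich_error(f, df, a, b, n):
    """Global error bound after spending n evaluation points."""
    heap = [(-gap(f, df, a, b), a, b)]
    for _ in range(n - 2):
        _, a0, b0 = heapq.heappop(heap)        # subinterval with largest gap
        m = (a0 + b0) / 2                      # interval bisection
        heapq.heappush(heap, (-gap(f, df, a0, m), a0, m))
        heapq.heappush(heap, (-gap(f, df, m, b0), m, b0))
    return -heap[0][0]

f, df = lambda x: x**4, lambda x: 4*x**3
for n in (4, 8, 16, 32, 64):
    print(n, sandwich_error(f, df, -1.0, 2.0, n))   # roughly quarters as n doubles
```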
For integer k≥3, we study λ(k), the minimum value such that for each convex polygon P there exists a convex k-gon Q with λ(P,Q)≤λ(k). Among other results, we prove that 2.118...≤λ(3)≤2.25 and λ(k) = 1 + Θ(1/k^2). We give an O(n^2 log^2 n)-time algorithm which, for any input convex n-gon P, finds a triangle T that minimizes λ(T,P) among triangles. However, in linear time we can find a triangle T with λ(T,P)≤2.25.
Our study is motivated by the attempt to reduce the complexity of the polygon containment problem, and also the motion-planning problem. In each case we describe algorithms which run faster when certain implicit slackness parameters of the input are bounded away from 1. These algorithms illustrate a new algorithmic paradigm in computational geometry for coping with complexity.
We study the problem of shortest paths for a line segment in the plane. As a measure of the distance traversed by a path, we take the average curve length of the orbits of prescribed points on the line segment. This problem is nontrivial even in free space (i.e., in the absence of obstacles). We characterize all shortest paths of the line segment moving in free space when we measure the average orbit length of the two endpoints.
This problem of optimal motion has been solved by Gurevich and also by Dubovitskij, who calls it Ulam's problem. Unlike previous solutions, our basic tool is Cauchy's surface-area formula. This new approach is relatively elementary, and yields new insights.
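For context, Cauchy's surface-area formula in the plane (a standard fact, not a statement from the paper) expresses the perimeter of a convex body K as the integral of its width over all directions:

```latex
\operatorname{per}(K) \;=\; \int_0^{\pi} w_K(\theta)\,\mathrm{d}\theta .
```

For a segment of length ℓ, for example, the width is w(θ) = ℓ·|cos θ|, which integrates to 2ℓ, the (degenerate) perimeter.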
Given n points in the plane and an integer k between 4 and n, we want to compute the overall number of convex k-gons whose corners belong to the given point set. The first paper gives an O(n^(k-2)) algorithm for this problem, improving over the trivial O(n^k) algorithm. In the second paper, a refinement of the technique improves the complexity to O(n^⌊k/2⌋). This time bound has been subsequently improved to O(k·n^3) in the third paper.
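For reference, the trivial method is easy to state in code. The Python sketch below counts convex quadrilaterals (the case k=4) by testing every 4-point subset for convex position; it is meant only as a correctness baseline for the faster algorithms.

```python
# Trivial O(n^4) count of convex quadrilaterals: four points are in
# convex position iff none of them lies in the triangle of the others.
from itertools import combinations

def orient(a, b, c):
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def in_triangle(p, a, b, c):
    d1, d2, d3 = orient(a, b, p), orient(b, c, p), orient(c, a, p)
    return (d1 >= 0 and d2 >= 0 and d3 >= 0) or \
           (d1 <= 0 and d2 <= 0 and d3 <= 0)

def count_convex_quadrilaterals(points):
    return sum(1 for quad in combinations(points, 4)
               if not any(in_triangle(quad[i],
                                      *[q for j, q in enumerate(quad) if j != i])
                          for i in range(4)))

pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 1)]
print(count_convex_quadrilaterals(pts))   # 3: the square and two quads using (2,1)
```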
Note: A more general treatment of the Sandwich algorithm (with different proofs) is given in the paper The convergence rate of the Sandwich algorithm for approximating convex functions by Günter Rote.
(For 3-clustering, an improved algorithm was later given in the paper Three-clustering of points in the plane.)
For a polygonal path in the plane or in space, we want to find an inscribed path of shortest circumference. In the open case the path can have different start and end points, whereas in the closed case these two points must coincide. We show that finding such shortest paths can be reduced to finding a shortest path in a planar channel. This problem can be solved in linear time in the open as well as in the closed case. Finally, we deal with constrained problems where the desired path has to fulfill additional properties; in particular, if it has to pass straight through a further point, we show that the length of such a constrained polygonal path is a strictly convex function of some parameter angle, and we derive an algorithm for determining such constrained polygonal paths efficiently.
A large variety of problems in computer science can be viewed from a common viewpoint as instances of algebraic path problems. Among them are of course path problems in graphs, such as the shortest path problem or problems of finding optimal paths with respect to more generally defined objective functions, but also graph problems whose formulations do not directly involve the concept of a path, such as finding all bridges and articulation points of a graph. Moreover, there are even problems which seemingly have nothing to do with graphs, such as the solution of systems of linear equations, partial differentiation, or the determination of the regular expression describing the language accepted by a finite automaton. We describe the relation among these problems and their common algebraic foundation. We survey algorithms for solving them: vertex elimination algorithms, such as Gauß-Jordan elimination, and iterative algorithms, such as the classical Jacobi and Gauß-Seidel iteration.
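The following Python sketch (an illustration, not taken from the survey) shows this common core at the simplest level: one elimination scheme parameterized by the semiring operations, instantiated once for shortest distances and once for transitive closure. The full Gauß-Jordan elimination of the survey additionally needs a closure ("star") operation for cyclic problems; it is omitted here since min-plus with nonnegative weights does not require it.

```python
# One Floyd-Warshall-style elimination over an abstract semiring
# (plus, times); different instantiations solve different path problems.
INF = float("inf")

def closure(a, plus, times):
    n = len(a)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                a[i][j] = plus(a[i][j], times(a[i][k], a[k][j]))
    return a

w = [[0,   3,   INF],
     [INF, 0,   1],
     [2,   INF, 0]]

# shortest distances: semiring (min, +)
d = closure([row[:] for row in w], min, lambda x, y: x + y)
print(d[0][2])   # 4, via vertex 1

# transitive closure: semiring (or, and) on the adjacency matrix
r = closure([[x < INF for x in row] for row in w],
            lambda x, y: x or y, lambda x, y: x and y)
print(r[1][0])   # True, via vertex 2
```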
The geodesic center of a simple polygon is a point inside the polygon which minimizes the maximum internal distance to any point in the polygon. We present an algorithm which calculates the geodesic center of a simple polygon with n vertices in time O(n log n).
In this paper we design and analyze algorithms for bicriteria minimum-cost flows. We present some complexity estimates, which lay the basis for the development of algorithms. We present some general results about the approximation of convex functions. We describe an exact algorithm and different realizations of the Sandwich approximation scheme, and we give results of numerical test calculations.
This method is applied to approximate the efficient point curve of the bicriteria minimum cost flow problem, which is a piecewise linear convex curve that may have an exponential number of breakpoints in the worst case.
It turns out that the left-to-right rule is better from a computational viewpoint, since it tries to avoid large changes in the parameters when solving successive minimum cost flow problems.
This thesis contains the paper The N-line traveling salesman problem and an extension of the paper Testing the necklace condition for shortest tours and optimal factors in the plane, written jointly with Herbert Edelsbrunner and Emo Welzl. The improvement of the latter part has not been published elsewhere.
The other solvable case concerns the necklace condition: A tour is a necklace tour if we can draw disks with the given points as centers such that two disks intersect if and only if the corresponding points are adjacent on the tour. If a necklace tour exists, it is the unique optimal tour. We give an algorithm which tests in O(n^(3/2)) time whether a given tour is a necklace tour, improving an algorithm of Edelsbrunner, Rote, and Welzl (1988) which takes O(n^2) time. It is based on solving a system of linear inequalities by the generalized nested dissection procedure of Lipton, Rose, and Tarjan (1979). We describe how this method can be implemented with only linear storage. We give another algorithm which tests in O(n^2 log n) time and linear space, whether a necklace tour exists for a given point set, by transforming the problem to a fractional 2-factor problem on a sparse bipartite graph. Both algorithms also compute radii for the disks realizing the necklace tour.
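The test for a given tour can also be phrased directly as a small linear program, as the following Python sketch shows (using scipy's general-purpose linprog with a hypothetical tolerance, not the nested-dissection method of the paper): radii realizing a necklace tour must reach the interpoint distance on tour edges and stay strictly below it on all other pairs, so we maximize the common slack t and check whether it is positive.

```python
# LP test for the necklace condition: find radii r_i >= 0 with
#   r_i + r_j >= |p_i p_j|      for pairs adjacent on the tour,
#   r_i + r_j <= |p_i p_j| - t  otherwise, maximizing the slack t.
import math
from scipy.optimize import linprog

def is_necklace_tour(points, tour):
    n = len(points)
    edges = {frozenset((tour[k], tour[(k + 1) % n])) for k in range(n)}
    A, b = [], []
    for i in range(n):
        for j in range(i + 1, n):
            row = [0.0] * (n + 1)
            d = math.dist(points[i], points[j])
            if frozenset((i, j)) in edges:
                row[i] = row[j] = -1.0            # r_i + r_j >= d
                A.append(row); b.append(-d)
            else:
                row[i] = row[j] = row[n] = 1.0    # r_i + r_j + t <= d
                A.append(row); b.append(d)
    c = [0.0] * n + [-1.0]                        # maximize t
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * n + [(0, 1)])
    return res.status == 0 and res.x[n] > 1e-9    # hypothetical tolerance

points = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(is_necklace_tour(points, [0, 1, 2, 3]))     # True for the square tour
```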
2. In: "Automata, Languages, and Programming". Proceedings of the
14th International Colloquium on Automata, Languages, and Programming (ICALP),
Karlsruhe, Juli 1987. Editor: T. Ottmann. Lecture Notes in Computer Science
266, Springer-Verlag, 1987, pp. 364-375, (Zbl 636.68042, MR #88k:90065).
(This is a shortened and slightly modified version.)
Note. In his dissertation, Two solvable cases of the traveling salesman problem, Günter Rote has improved the time complexity of the algorithm for testing a given tour to O(n^(3/2)). The new algorithm is based on solving a system of linear inequalities by the generalized nested dissection procedure of Lipton, Rose, and Tarjan (1979).
Our approach is based on the binary tree method of Dekel and Sahni (Binary trees and parallel scheduling algorithms, IEEE Transactions on Computers C-32, 1983). It requires a thorough analysis of the behavior of the sequential algorithm. We show that a parallel algorithm of Dekel and Sahni for the same scheduling problem relies on an erroneous assumption.
and a rectangular one due to Yves Robert and Denis Trystram, "An orthogonal systolic array for the algebraic path problem", Computing 39, 187-199.
Then we derive a new hexagonal array for solving a certain type of dynamic programming recursion that arises, for example, in context-free language recognition, in the construction of optimal binary search trees, or in the computation of an optimal sequence of multiplications for the evaluation of a product of matrices. A rectangular array for this problem is due to L. J. Guibas, H. T. Kung, and C. D. Thompson, "Direct VLSI implementation of combinatorial algorithms", Proc. Caltech Conference on VLSI, January 1979, pp. 756-763. In general, the type of transformation used here allows arbitrary systolic arrays to be transformed into unidirectional ones, which are preferable from the points of view of fault tolerance, two-level pipelining, and algorithm partitioning.
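In its best-known textbook form, this recursion is the matrix-chain product dynamic program; the following short Python sketch (standard material, shown only to pin down the recursion that such arrays evaluate in parallel) computes it sequentially:

```python
# cost[i][j] = cheapest way to multiply matrices i..j, filled in order
# of increasing chain length; the same recursion underlies optimal
# binary search trees and CYK-style context-free recognition.
def matrix_chain(dims):
    """dims[i] x dims[i+1] is the shape of matrix i."""
    n = len(dims) - 1
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(cost[i][k] + cost[k + 1][j] +
                             dims[i] * dims[k + 1] * dims[j + 1]
                             for k in range(i, j))
    return cost[0][n - 1]

print(matrix_chain([10, 30, 5, 60]))   # 4500: (A*B)*C beats A*(B*C)
```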
For the case of maxmin equations, we derive upper bounds on the number of solutions by using network flow techniques. We also disprove a conjecture of Czogala, Drewniak and Pedrycz (1982) that an n×n system of maxmin equations has less than 2^n minimal solutions.
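For orientation (standard theory on maxmin equations over [0,1], not this paper's contribution): the system A∘x = b with (A∘x)_i = max_j min(a_ij, x_j) is solvable iff its classical greatest candidate solution satisfies it, and the minimal solutions studied here all lie below that greatest one. A small Python sketch:

```python
# Greatest-solution test for a max-min system A o x = b over [0,1].
def compose(A, x):
    return [max(min(aij, xj) for aij, xj in zip(row, x)) for row in A]

def greatest_solution(A, b):
    m, n = len(A), len(A[0])
    return [min((b[i] if A[i][j] > b[i] else 1.0) for i in range(m))
            for j in range(n)]

A = [[0.5, 0.9],
     [0.8, 0.3]]
b = [0.5, 0.3]
x = greatest_solution(A, b)
print(x, compose(A, x) == b)   # [0.3, 0.5] True: the system is solvable
```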