Combinatorial Generation of Planar Sets

We introduce a multi-dimensional generalization of the Euclidean algorithm and show how it is related to digital geometry and particularly to the generation and recognition of digital planes. We show how to associate with the steps of the algorithm geometrical extensions of substitutions, i.e., rules that replace faces by unions of faces, to build finite sets called patterns. We examine several of their combinatorial, geometrical and topological properties. This work is a first step toward the incremental computation of patterns that locally fit a digital surface for the accurate approximation of tangent planes.


Introduction
Digital geometry mainly deals with sets of discrete elements considered to be digitized versions of Euclidean objects. A digital surface may be seen as a mesh of unit square faces whose vertices have integer coordinates. A challenge is to decompose digital surfaces into patches, such as pieces of digital planes.
A digital plane has been analytically defined as a set of points with integer coordinates lying between two parallel planes [1]. The topological properties of such an object depend on the thickness of the strip. If one considers the graph whose nodes are points with integer coordinates and whose edges link points whose distance is less than a given thickness, one can efficiently generate an arbitrary piece of digital plane using a breadth-first search and the analytical definition as a set-membership predicate. Furthermore, given a finite point set, one can decide in linear time whether this set belongs to a digital plane, using linear programming. See for instance [2] and, for a review on digital planarity, [3]. However, a linear programming solver is of limited help for the analysis of digital surfaces, because one does not know which point set should be tested to obtain patches that approximate the tangent planes of the underlying surface.
Tristan Roussillon, tristan.roussillon@liris.cnrs.fr, UMR5205, INSA Lyon, CNRS, UCBL, LIRIS, Univ Lyon, 69622 Villeurbanne, France

In order to cope with this problem, plane-probing algorithms have been developed [4,5]. Their main feature is to decide on the fly how to probe a given point set and locally align a triangle with it, without requiring any user input. However, if probing for points in a sparse way is perfect for digital planes, it is not enough for non-convex parts, where the triangles may jump over holes or stab the digital surface. Therefore, that approach also requires associating pieces of digital planes with the triangles and checking whether they fit the digital surface or not.
In this paper, we make a first step toward the incremental generation of planar patches, which can be used during the execution of a plane-probing algorithm (see Fig. 1). To do that, we take advantage of the combinatorial properties of digital planes.
The particular case of digital lines has been studied for a long time in different contexts and has led to many applications. One key result is that digital lines are hierarchical point sets whose structure is exactly described by the continued fraction expansion of their slope and strongly relies on the Euclidean algorithm. See for instance [6] and, for a survey on digital straightness, [7].
There has been much effort to find similar results in three dimensions, despite the lack of a canonical algorithm and the diversity of existing generalizations of the Euclidean algorithm. Some combinatorial results, involving symmetries, piece exchanges and flips, have been stated thanks to an appropriate representation of digital planes [8]. However, most other related works depend on a multi-dimensional generalization of the Euclidean algorithm, such as Brun, e.g., [9], Jacobi-Perron, e.g., [10], Fully subtractive, e.g., [11], or a mix of several of them [12]. Those algorithms have been used to generate digital planes from a normal vector. There are two different but closely related construction schemes in the literature. The first one is based on unions and translations of point sets [12-14]. It has been used mostly to construct the thinnest digital plane that is connected [11,13,15,16]. The second one is based on a description of standard digital planes as unions of square faces and uses geometrical extensions of substitutions, i.e., rules that replace square faces by unions of square faces. Since the pioneering work of Arnoux and Ito [17], this formalism has been used for instance in [9,10,13,18,19]. Both construction schemes incrementally generate sets, so that the current set is included in the next one, but they suffer from topological and geometrical limitations. This work is based on the second scheme, which generates few elements in comparison with the first one, e.g., [12].

Fig. 1 Local approximation of a digital sphere of radius 63 by planar patches (in green) from two starting points (in blue). The implementation combines the plane-probing algorithm H [5] with the generation method of Sect. 3 (Color figure online)
This paper is an extension of [20] and is organized as follows. We first present a multi-dimensional generalization of the Euclidean algorithm in Sect. 2. It includes a far larger class of existing algorithms than the generalization proposed in [20]. Furthermore, we do not only show that plane-probing algorithms look like a geometrical version of a three-dimensional Euclidean algorithm, but also that all the classical three-dimensional Euclidean algorithms, like Brun or Jacobi-Perron, can be translated into plane-probing algorithms. In Sect. 3, we explain how one can generate unions of faces from the output of the above-mentioned algorithms. We also prove several combinatorial properties of the generated sets. Several of them are new; others are stronger results or have a more self-contained proof than the ones included in [20].

Generalization of the Euclidean Algorithm
We first propose in Sect. 2.1 a multi-dimensional generalization of the Euclidean algorithm: Algorithm 1. Its validity and termination are then established in Sect. 2.2. Algorithm 1 is linked with several other known algorithms in the last two subsections.

Notations, Definitions and Algorithm
Let {e_1, . . ., e_d} be the canonical basis of R^d, with d ≥ 2. We denote by 0 the origin and by 1 = e_1 + ··· + e_d the vector with all coordinates equal to 1, whatever the dimension d. Let I_d be the d × d identity matrix. Matrices and vectors are written in bold, respectively with upper-case and lower-case letters. Vectors are thought of as column vectors, while their transposes, prefixed by the letter t, are thought of as row vectors. For instance, the i-th coordinate of x is equal to t e_i x. However, for the sake of shortness, t e_i x will also be denoted by x_i.
Let N^d be the set of vectors of dimension d with nonnegative integer coordinates. Let ≤ be the coordinatewise partial order defined such that, for all pairs x, y ∈ N^d, x ≤ y if and only if x_i ≤ y_i for all i ∈ {1, . . ., d}. Endowed with the order relation ≤, N^d is a lattice.
The Euclidean algorithm is often used to compute the greatest common divisor of two numbers, but in what follows we are only interested in the sequence of operations performed when the algorithm is executed. Since multiplying the two numbers by a constant does not change the sequence of operations, we can assume that the greatest common divisor is equal to 1. More generally, for d numbers, we can similarly focus on vectors of N^d whose coordinates are relatively prime, i.e., on the set

P_d := {v ∈ N^d | gcd(v_1, . . ., v_d) = 1}.

Note that P_d contains e_1, . . ., e_d and 1, but not 0.
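The coprimality condition defining P_d is straightforward to check; the following minimal sketch (the helper name in_P is ours) tests membership by folding gcd over the coordinates.

```python
from functools import reduce
from math import gcd

def in_P(v):
    """True iff v lies in P_d: nonnegative integer coordinates,
    not all zero, and greatest common divisor equal to 1."""
    return all(c >= 0 for c in v) and reduce(gcd, v) == 1
```

As stated above, the canonical basis vectors and 1 pass the test, while 0 and any vector with a common factor fail.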
We now define the set of inputs accepted in Algorithm 1: given a set T ⊂ P_d of terminal vectors, (T) denotes the set of vectors v ∈ P_d such that t ≤ v for some t ∈ T. We will only consider ({1}) and ({e_1, . . ., e_d}). In the first set, the coordinates of the vectors are coprime and strictly positive, whereas they are coprime and nonnegative in the second one. Algorithm 1 takes as input a set T ⊂ P_d whose elements are called terminal and a vector v ∈ (T). While v is not terminal, it is iteratively transformed into another vector by matrix multiplication. The matrices must belong to U_d, defined as the set of d × d matrices having −1 at the intersection of the k-th row and the l-th column, with distinct k, l ∈ {1, . . ., d}, and the same entries as I_d elsewhere. Note that the elements of U_d are matrices with determinant 1 and are thus unimodular. Note also that multiplication by a unimodular matrix U preserves vectors of coprime coordinates, i.e., v ∈ P_d ⇒ Uv ∈ P_d. This fact is used in the next section.
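For concreteness, the matrices of U_d can be enumerated directly; this sketch (function names are ours) builds them for any d and exhibits the inverse of each one, which is the identity with +1 instead of −1 at (k, l).

```python
from itertools import permutations

def identity(d):
    return [[int(i == j) for j in range(d)] for i in range(d)]

def U_matrices(d):
    """Map each pair (k, l), k != l, to the matrix of U_d equal to I_d
    except for a -1 entry at row k, column l."""
    mats = {}
    for k, l in permutations(range(d), 2):
        m = identity(d)
        m[k][l] = -1
        mats[(k, l)] = m
    return mats

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][p] * b[p][j] for p in range(n)) for j in range(n)]
            for i in range(n)]
```

Each such matrix has determinant 1: flipping the sign of the off-diagonal entry yields its inverse, so the product of the two is I_d.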

Algorithm 1 Multidimensional Euclidean algorithm.
Require: a terminal set T and a vector v ∈ (T)
Ensure: a list of matrices (U_n)_{n∈{0,...,N}} and a terminal t ∈ T such that U_N ··· U_0 v = t
1: lst ← (I_d)
2: while v ∉ T do
3:   select U ∈ U_d such that Uv ≠ v and Uv ∈ (T)
4:   append U to lst
5:   v ← Uv
6: end while
7: return lst, v
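The loop above can be sketched in Python. The selection on line 3 is passed as a callback; the greedy rule below is one valid choice for T = {1} (it follows the constraints of Lemma 1), and all helper names are ours.

```python
def reduce_vector(v, terminals, choose):
    """Algorithm 1, with matrices of U_d encoded as index pairs (k, l):
    multiplying by the matrix with -1 at (k, l) subtracts v_l from v_k."""
    v = list(v)
    lst = []                      # stands for the list (I_d, U_1, ..., U_N)
    while tuple(v) not in terminals:
        k, l = choose(v)          # line 3: select a matrix of U_d
        v[k] -= v[l]              # v <- Uv
        lst.append((k, l))
    return lst, v

def greedy_choice(v):
    # Valid for T = {1}: pick k, l with v_l >= 1 and v_k - v_l >= 1,
    # which exist as long as v is not terminal (Lemma 2).
    for k in range(len(v)):
        for l in range(len(v)):
            if k != l and v[l] >= 1 and v[k] - v[l] >= 1:
                return k, l
    raise ValueError("v is already terminal")
```

With t v = (5, 3) and T = {1}, three subtractions are performed, in accordance with the Euclidean algorithm in its additive form.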

Validity, Termination and Complexity
In this subsection, we answer two questions regarding Algorithm 1: is there always a matrix U ∈ U_d satisfying the constraints on line 3? Does the algorithm always terminate?
Before answering, we introduce the following technical result:

Lemma 1 Let T be either equal to {1} or {e_1, . . ., e_d}. Let us consider a vector v ∈ (T) and a matrix U ∈ U_d containing −1 at the intersection of the k-th row and the l-th column, with distinct k, l ∈ {1, . . ., d}.
If v_l ≥ 1 and, in addition, v_k − v_l ≥ 1 when T = {1} (resp. v_k − v_l ≥ 0 when T = {e_1, . . ., e_d}), then Uv ≠ v and Uv ∈ (T). Indeed, t e_k (Uv) = v_k − v_l while the other coordinates are left unchanged, so putting together the above relations concludes.
We are now able to answer the first question about the existence of a matrix U ∈ U d for two specific values of T .
Lemma 2 Let T be either equal to {1} or {e_1, . . ., e_d}. If v ∈ (T)\T, then there exists U ∈ U_d such that Uv ≠ v and Uv ∈ (T).
Proof Since there are different corner cases for the two possible values of T , we separately deal with them.
Case T = {1}: If v ∈ (T)\T, there are two consequences. First, 1 ≤ v, which means that the coordinates of v are greater than or equal to 1. Second, v ∈ P_d and v ≠ 1, which means that the coordinates of v are not all equal. As a result, there exist two distinct k, l ∈ {1, . . ., d} such that v_l ≥ 1 and v_k − v_l ≥ 1.

Case T = {e_1, . . ., e_d}: Again, if v ∈ (T)\T, there are two consequences, but slightly different. First, there is t ∈ {e_1, . . ., e_d} such that t ≤ v, which means that all coordinates of v are greater than or equal to 0. Second, v ∈ P_d and v ≠ t, which means that v contains at least two nonzero coordinates. As a result, there exist two distinct k, l ∈ {1, . . ., d} such that v_l ≥ 1 and v_k − v_l ≥ 0.
In both cases, using Lemma 1, we conclude that there exists a matrix U ∈ U_d, whose entry (k, l) equals −1, such that Uv ≠ v and Uv ∈ (T).
We are now able to study the termination and complexity of Algorithm 1 for two specific values of T .
Theorem 3 Let T be either equal to {1} or {e_1, . . ., e_d}. Let ω be the 1-norm of the input vector v. Algorithm 1 terminates after fewer than ω iterations.

Proof
The coordinates of v are always nonnegative, and the matrix multiplication consists in subtracting a nonzero coordinate from another. The integer ||v||_1 thus strictly decreases throughout the iterations. In addition, v always remains in (T) by Lemma 2. The algorithm stops when v reaches a bottom element of (T), i.e., an element of T. Consequently, the algorithm terminates when ||v||_1 = d if T = {1}, or when ||v||_1 = 1 if T = {e_1, . . ., e_d}, after fewer than ω iterations.
In the rest of the paper, we will mainly focus on the following finite sequences:

Definition 1 A finite sequence of matrices (U_n)_{0≤n≤N} is valid if U_0 = I_d and U_n ∈ U_d for all n ∈ {1, . . ., N}. In addition, a valid sequence reduces a vector v if and only if there exists a terminal t ∈ T such that U_N ··· U_0 v = t. For all n ∈ {0, . . ., N}, we set v_n := U_n ··· U_0 v and a_n := (U_0)^{-1} ··· (U_n)^{-1} v_N.

By design and by Theorem 3, Algorithm 1 returns a sequence of matrices that reduces the input vector v. Furthermore, the output of the algorithm provides a way of representing the input thanks to the following equality:

v = (U_0)^{-1} (U_1)^{-1} ··· (U_N)^{-1} t.  (1)

Note that a_n is generally not equal to v_n but, for n = N, is equal to v.
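Under one consistent reading of Definition 1 (v_n := U_n ··· U_0 v and a_n := (U_0)^{-1} ··· (U_n)^{-1} v_N), equality (1) can be checked numerically: undoing the recorded subtractions in reverse order recovers the input, i.e., a_N = v. This sketch reuses the encoding of the matrices of U_d as index pairs; all helper names are ours.

```python
def reduce_vector(v, terminals, choose):
    v, lst = list(v), []
    while tuple(v) not in terminals:
        k, l = choose(v)
        v[k] -= v[l]              # multiply by the matrix with -1 at (k, l)
        lst.append((k, l))
    return lst, v

def greedy_choice(v):
    # one valid selection rule for T = {1} (Lemma 1)
    for k in range(len(v)):
        for l in range(len(v)):
            if k != l and v[l] >= 1 and v[k] - v[l] >= 1:
                return k, l
    raise ValueError("terminal")

def a(n, steps, v_N):
    """a_n = (U_0)^-1 ... (U_n)^-1 v_N; the inverse of the matrix with -1
    at (k, l) is the matrix with +1 at (k, l), and U_0 is the identity."""
    x = list(v_N)
    for k, l in reversed(steps[:n]):   # steps[0] encodes U_1, etc.
        x[k] += x[l]
    return x

steps, v_N = reduce_vector((2, 3, 5), {(1, 1, 1)}, greedy_choice)
```

Here the terminal is 1, and applying the inverse subtractions to it rebuilds (2, 3, 5), which illustrates (1).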

Examples in Low Dimensions
In this subsection, we show how Algorithm 1 relates to the Euclidean algorithm in 2D and to some of its well-known generalizations in 3D.

In 2D
The set U_2 only contains two matrices, L := (1 −1; 0 1) and R := (1 0; −1 1), with rows separated by semicolons. When fed with the terminal set T = {1} and an input vector v, Algorithm 1 does the following:
• if v_1 > v_2, apply L, i.e., subtract v_2 from v_1;
• if v_2 > v_1, apply R, i.e., subtract v_1 from v_2;
• otherwise, v must be equal to 1, stop and return,
which is exactly the Euclidean algorithm in its additive form.
For instance, with T = {1} and t v = (5, 3) as input, Algorithm 1 returns the list (I_2, L, R, L). Hence, the following equivalence:

L R L v = 1 ⇔ v = L^{-1} R^{-1} L^{-1} 1.

Note that using the terminal set T = {e_1, e_2} instead of T = {1} results in one more iteration and in replacing 1 with L^{-1} e_2 in the above product.

In 3D
In dimension 2, it is clear which coordinate has to be subtracted from the other, whereas in dimension 3 it is not, hence the diversity of existing generalizations. See [21] for a list as well as a detailed analysis of several of them. Several existing algorithms stick to a convention. For instance, assuming w.l.o.g. 0 ≤ v_1 ≤ v_2 ≤ v_3, Brun subtracts the second largest coordinate from the largest one, i.e., v_3 ← v_3 − v_2, whereas Selmer subtracts the smallest coordinate from the largest one, i.e., v_3 ← v_3 − v_1. In both strategies, the subtraction is encoded in a matrix U ∈ U_3. Furthermore, for any vector of coprime and strictly positive coordinates, Selmer can be reproduced using Algorithm 1 and T = {1}. In order to be able to deal with null coordinates, it is enough to choose T = {e_1, e_2, e_3}. Brun can also be reproduced using Algorithm 1, but only with T = {e_1, e_2, e_3}, because a zero coordinate may appear whenever the two largest coordinates are equal.
For instance, with T = {e_1, e_2, e_3} and t v = (1, 2, 2) as input, using Brun in line 3 for the matrix selection and taking the index order in case of ties, Algorithm 1 returns the terminal e_1 and a list of four matrices: I_3, then the matrix of U_3 with −1 at position (3, 2), and then twice the matrix with −1 at position (2, 1). Hence, v can be recovered from e_1 by the corresponding product of inverse matrices, as in (1).
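Brun's rule fits directly in this framework. The sketch below encodes the matrices as index pairs and breaks ties by letting the larger index win, which is one possible reading of the index-order convention and reproduces the terminal e_1 for t v = (1, 2, 2); all helper names are ours.

```python
def brun_choice(v):
    """Brun: subtract the second-largest coordinate from the largest one.
    Sorting by (value, index) makes the larger index win in case of ties
    (an assumption on the tie-breaking convention)."""
    order = sorted(range(len(v)), key=lambda i: (v[i], i))
    return order[-1], order[-2]   # (k, l): v_k <- v_k - v_l

def reduce_vector(v, is_terminal, choose):
    v, lst = list(v), []
    while not is_terminal(v):
        k, l = choose(v)
        v[k] -= v[l]
        lst.append((k, l))
    return lst, v

def is_unit(v):
    # terminal set T = {e_1, e_2, e_3}
    return sorted(v) == [0] * (len(v) - 1) + [1]

steps, t = reduce_vector((1, 2, 2), is_unit, brun_choice)
```

With this convention, three subtractions are performed and the terminal is indeed e_1.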
Other, less elementary, strategies may be used as well. For instance, still assuming w.l.o.g. 0 ≤ v_1 ≤ v_2 ≤ v_3, Poincaré subtracts from each coordinate the next smaller one, i.e., (v_1, v_2, v_3) ← (v_1, v_2 − v_1, v_3 − v_2), while Jacobi-Perron subtracts from v_2 and v_3 as many multiples of v_1 as possible. One can check that we can choose T = {e_1, e_2, e_3} as terminal set for these two strategies, by possibly applying the two-dimensional Euclidean algorithm once a coordinate has reached the value 0 in the Jacobi-Perron case. Even if the above matrices are not in U_3, one can check that they can be replaced by a product of elementary matrices, which are all in U_3. Algorithm 1 is thus also a generalization of such strategies.

Relation with Digital Geometry
The goal of this subsection is to relate Algorithm 1 to algorithms recently developed in the context of digital geometry [5,22]. This relation is based on the transpose of the partial product U_n ··· U_0, i.e., t U_0 ··· t U_n. Even if the above-mentioned algorithms are three-dimensional, the following propositions are valid in a d-dimensional space. They gather several properties involving the matrices t U_0 ··· t U_n as well as v_n and a_n introduced in Definition 1:

Proposition 4 For all n ∈ {0, . . ., N} and i ∈ {1, . . ., d}:

Proof
In both cases, it is enough to use the definitions of v_n and a_n.

Let B_n be defined as follows:

Proof (Sketch) For both definitions of B_n, it is enough to notice that the dot product between a_n and each vector of the tuple is equal to 0 by Proposition 4, item 4. The fact that the tuple is a basis of the lattice comes from the fact that the matrix t U_0 ··· t U_n has determinant 1.
Figure 2 illustrates the action of the matrix t U_0 ··· t U_n on the canonical basis. In other words, it deforms an orthonormal basis into a basis that is more and more aligned with the plane of normal v_0, because the sum of the scalar products between the basis vectors and v_0 gets smaller and smaller by Proposition 4, item 4. At the last step, the i-th column of t U_0 ··· t U_N is perfectly aligned with the plane of normal v_0 when t e_i v_N = 0, but not when t e_i v_N = 1. Despite this, one can always deduce d − 1 vectors perfectly aligned with the plane of normal v_0 by Proposition 5.
The next proposition requires the following definition.

Definition 3
The digital hyperplane of normal v ∈ P_d is the infinite set

P_v := {x ∈ Z^d | 0 ≤ t v x < ||v||_1}.  (2)

Proposition 6 For all n ∈ {0, . . ., N} and i ∈ {1, . . ., d}:

Fig. 2 We have implicitly represented the scalar projection of the basis vectors in the direction of v_0 with the help of dotted lines. The scalar products are equal to the coordinates of v_N and are thus all equal to 0 or 1

Proof We first have

where the last equality comes from t v_0 1 = ||v_0||_1 and Proposition 4, item 4.
We now focus on (ii). By Definition 3, the second inequality is equivalent to ± t e_i v_n > ||v_0||_1, which is false because ± t e_i v_n ≤ ||v_n||_1 ≤ ||v_0||_1. As a result, only the first inequality, which is equivalent to ± t e_i v_n ≤ 0, can be true. We thus clearly get t e_i v_n = 0 if we have both.

Let us now consider the predicate InPlane := "is x in P?" for any hyperplane P ∈ {P_v | v ∈ P_d} of unknown normal vector. Proposition 6 shows that we are able to check whether a given coordinate of the unknown normal vector is strictly positive or equal to 0 by probing the hyperplane with the predicate InPlane. By Lemma 1, such tests on coordinates are a way of selecting a possible matrix on line 3 of Algorithm 1. As a consequence, a geometrical version of Algorithm 1 can be obtained by replacing the vector v by the predicate InPlane and changing the way a new matrix is selected accordingly. The three-dimensional plane-probing algorithms H, R [5] and L [22] rely on exactly that idea.
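When the normal is known, the predicate InPlane is a two-comparison check under the standard arithmetical definition of digital hyperplanes (0 ≤ t v x < ||v||_1), which is convenient for testing. A minimal sketch (the function name is ours):

```python
def in_plane(v, x):
    """InPlane for the digital hyperplane of normal v:
    0 <= <v, x> < ||v||_1."""
    s = sum(a * b for a, b in zip(v, x))
    return 0 <= s < sum(abs(a) for a in v)
```

For t v = (2, 3, 5), the origin and (1, 1, 0) belong to the plane, whereas (1, 1, 1), whose dot product with v is 10, and (−1, 0, 0) do not.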
The difference between the three algorithms lies in their sets of possible operations. Those operations can be represented, up to symmetries, by a matrix: in H, a is set to 1 and b to 0, which means that the matrix belongs to U_3. In R, a is also set to 1, but b can be any nonnegative integer. In L, both a and b can be any nonnegative integers, provided that they are not both null. The matrices used in R and L are not necessarily in U_3. However, if they are not, they can be replaced by a product of elementary matrices, which are all in U_3. We can therefore assume that H, R and L produce a valid sequence of matrices (U_n)_{0≤n≤N}. In addition, that sequence reduces the normal vector of the underlying hyperplane. As in (1), we have a_N = v [5, Corollary 4]. In other words, those algorithms, which compute a_N only from the predicate InPlane, are indeed able to retrieve the normal vector v of P.
To end the section, note that one can even further generalize our approach if one replaces InPlane by another set-membership predicate involving a given digital surface as shown in Fig. 1 and [5, Section 5].

Discussion
In this subsection, we compare the plane-probing algorithms H, R and L to classical three-dimensional Euclidean algorithms, in particular Brun, Selmer, Poincaré and Jacobi-Perron, presented above.
What makes those two kinds of algorithms very different from each other is the way they select a matrix in line 3 of Algorithm 1. On the one hand, plane-probing algorithms use a geometrical criterion involving the column vectors of t U_0 ··· t U_n. On the other hand, classical three-dimensional Euclidean algorithms, even when run with the predicate InPlane, use an arithmetical criterion involving the coordinates of U_n ··· U_0 v.

Non-uniqueness. In general, several matrices can be chosen. The arithmetical criterion aims at reducing the set to one, but if v_n has equal coordinates, there is still more than one possible matrix to get v_{n+1}. An often-used convention to break ties is to compare the indices of the equal coordinates. The geometrical criterion is based on an in-sphere test [5] and usually reduces the set to one, but in case of co-spherical points, again, there is more than one possible matrix. In such cases, one can resort to a lexicographic order.
If the point associated with a pair of distinct k, l ∈ {1, 2, 3} belongs to P (such a point is depicted with a red disk in Fig. 3), then there exists a unique matrix U ∈ U_3 containing −1 at the intersection of the k-th row and the l-th column. As a result, by Proposition 6 and Lemma 1, U is a possible candidate for U_{n+1}. With that geometrical point of view, in Fig. 3, choosing a red disk is like choosing a matrix.
Then, an equality between two coordinates of v_1 is detected, because there are two points symmetric about 1, namely y and y′, that do not belong to P. From the two candidate points, the second one is chosen because it has the greatest index, 3 (see (b) and (c) in Fig. 3).
Regarding H, the red and blue disks are in a co-spherical position at the very beginning, but z is chosen according to the lexicographic order on points (see (d) and (e) in Fig. 3). Then, z′ is chosen because the sphere passing through the triangle and z′ does not contain z (see (e) and (f) in Fig. 3).

Impact of previous choices.
In the arithmetical approach, the choice of U_{n+1} only depends on the coordinates of v_n, regardless of how v_n has been obtained. If, when reducing two different input vectors v and v′, one has v_n = v′_n, the next matrices will be exactly the same for the two sequences. This is not the case in the geometrical approach, because the choice of U_{n+1} depends on the whole sequence. In that approach, even if v_n equals v′_n, the next matrices will in general be different for the two reductions.

Geometrical properties.
The column vectors of t U_0 ··· t U_n exhibit interesting properties in the geometrical approach. One such property is that the basis of the rank-two lattice returned by R (resp. H) is experimentally always (resp. almost always) reduced [5]. It has even been proven that the basis returned by L is always reduced [23]. Another property concerns the points selected during the execution of the algorithm, i.e., the set described in (3). Experimentally, those points are on average much closer to 1 in the geometrical approach than in the arithmetical approach. This observation is illustrated in Fig. 4 and supported by the experiments described below.
There are 6578833 vectors with relatively prime components in that range.For all of them, we checked whether the returned basis is reduced or not.In the latter case, we also computed the number of reductions needed to reduce it a posteriori.Results are reported in Table 1.The returned bases are generally not reduced with the arithmetical approach, while they generally are with the geometrical approach.
Fig. 3 The digital plane of normal t v = (2, 3, 5) is depicted in gray. The vector v has been reduced with Selmer in (a-c) and H in (d-f). Only the steps n = 0, 1, 2 are shown. For every step, the three-point tuple is depicted as a blue triangle, while the set of candidate points, k ≠ l, is depicted as a red hexagon, which is a usual representation for plane-probing algorithms. The vertices are depicted with disks (resp. circles) if they belong (resp. do not belong) to the digital plane

Fig. 4 The digital plane of normal t v = (1, 13, 17) is depicted in gray. The vector v has been reduced with Selmer in (a) and H in (b). For every step n, the three-point tuple is shown

Table 1 Besides H, R and L, other algorithms are denoted by their initials. The average number of reductions is computed only among non-reduced bases

Fig. 5 The maximal distance between 1 and every point selected during execution of the algorithm is plotted against the magnitude of the input vector (see the text for the experimental settings). The line labeled by P is truncated because of its extremely large slope

Furthermore, we computed for all vectors the maximal distance between 1 and every point of the set described in (3).
In Fig. 5, we plotted that maximal distance against the magnitude of the input vector. In order to smooth the lines, we discretized the magnitude of the input vector into bins of size 10 and took the average over every bin. The lines for H, R and L lie upon each other because those three algorithms often make the same choices for the considered vectors. More importantly, those lines lie well below the other ones, which means that the geometrical approach is much more local than the arithmetical one, as suggested by Fig. 4.

Pattern Generation with Extensions of Substitutions
As discussed in the previous section, Algorithm 1 provides a terminal and a sequence of matrices that reduces the input vector v. A variant consists in passing the predicate InPlane instead of v, as with plane-probing algorithms. In this section, we show how to generate a piece of digital hyperplane of normal v from a sequence of matrices returned by Algorithm 1 or its variant. Our method is based on a description of digital hyperplanes as unions of faces, recalled in the next subsection.

Digital Hyperplanes as Unions of Faces
For x ∈ Z^d and i ∈ {1, . . ., d}, we define the (d − 1)-dimensional face in x perpendicular to the direction i as the following subset of R^d:

(x, i*) := { x + Σ_{j ∈ {1,...,d}\{i}} λ_j e_j | λ_j ∈ [0, 1] for all j ≠ i }.  (4)

The point x will be called the origin of the face and i the type of the face. A three-dimensional illustration is given in Fig. 6a.
Definition 4 (Fig. 6b) The stepped hyperplane of normal v ∈ P_d is the infinite set

S_v := { (x, i*) | x ∈ Z^d, i ∈ {1, . . ., d}, 0 ≤ t v x < t v e_i }.  (5)

The link between digital hyperplanes (Definition 3) and stepped hyperplanes (Definition 4) is made explicit in the following proposition. Note that the vertices of a face (x, i*) ∈ S_v form the set {x + t | t ∈ T_i}, where T_i is the set of sums of subsets of {e_j | j ∈ {1, . . ., d}\{i}}.

Proposition 7 A point of Z^d belongs to P_v if and only if it is a vertex of a face of S_v.
We first show that all the vertices of a face (x, i*) ∈ S_v are in P_v. By definition, (x, i*) ∈ S_v is equivalent to 0 ≤ t v x < t v e_i. By adding t v t, t ∈ T_i, we get t v t ≤ t v (x + t) < t v (e_i + t). In addition, we have by definition 0 ≤ t v t and t v e_i + t v t ≤ t v e_i + t v Σ_{j ∈ {1,...,d}\{i}} e_j = t v 1 = ||v||_1. We have therefore 0 ≤ t v (x + t) < ||v||_1, i.e., x + t ∈ P_v for all t ∈ T_i by (2).

Now, we show that for all points y ∈ P_v, there is a face of type i in S_v such that y is one of its vertices. More precisely, we choose i such that t v e_i is maximal, i.e., i = arg max_{j ∈ {1,...,d}} t v e_j. In addition, we number the K partial sums of T_i in nondecreasing order of their dot products with v, i.e., so that t_1 := 0 and t v t_1 ≤ t v t_2 ≤ ··· ≤ t v t_K. Note that t_K = Σ_{j ∈ {1,...,d}\{i}} e_j and, because of our choice for i, t v t_{k+1} − t v t_k ≤ t v e_i for all k ∈ {1, . . ., K − 1}. For all k ∈ {1, . . ., K − 1}, if t v t_k ≤ t v y < t v t_{k+1}, then 0 ≤ t v (y − t_k) < t v t_{k+1} − t v t_k ≤ t v e_i, i.e., (y − t_k, i*) ∈ S_v by (5), and y = (y − t_k) + t_k is a vertex of that face. Otherwise, t v t_K ≤ t v y < ||v||_1 = t v t_K + t v e_i, so that (y − t_K, i*) ∈ S_v by (5) and y is again one of its vertices.
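The equivalence proven above (the points of P_v are exactly the vertices of the faces of S_v) can be checked exhaustively on a finite window. The sketch below (helper names are ours) enumerates the faces of the stepped plane with origins in a box, takes their vertices as the 0/1-combinations of the basis vectors e_j, j ≠ i, and compares them with the points of the digital plane.

```python
from itertools import product

V = (2, 3, 5)                       # a normal vector with coprime coordinates
NORM1 = sum(V)

def in_plane(x):
    """0 <= <V, x> < ||V||_1, i.e., x is a point of the digital plane."""
    s = sum(a * b for a, b in zip(V, x))
    return 0 <= s < NORM1

def faces(rng):
    """Faces (x, i*) of the stepped plane: 0 <= <V, x> < V_i."""
    for x in product(rng, repeat=3):
        s = sum(a * b for a, b in zip(V, x))
        for i in range(3):
            if 0 <= s < V[i]:
                yield x, i

def vertices(x, i):
    """Vertices of (x, i*): x plus every 0/1-combination of e_j, j != i."""
    dirs = [j for j in range(3) if j != i]
    for eps in product((0, 1), repeat=2):
        y = list(x)
        for e, j in zip(eps, dirs):
            y[j] += e
        yield tuple(y)

all_vertices = {y for (x, i) in faces(range(-6, 7)) for y in vertices(x, i)}
```

Every vertex passes the digital-plane test, and every digital-plane point in a smaller box (so that its covering face has its origin inside the enumerated box) is a vertex of some face.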

Words, Substitutions and Their Extensions
In this subsection, we present the framework we use for our generation method.It is based on geometrical extensions of substitutions: rules that replace faces by unions of faces in the same way that substitutions replace letters by words.
We consider a d-letter alphabet A_d := {1, . . ., d}. A word is an element of the free monoid A_d* generated by A_d. The empty word is denoted by ε, and the concatenation operation is denoted by • or is left implicit. The abelianization mapping f : A_d* → N^d is defined by f(w) := (|w|_1, . . ., |w|_d), where |w|_i denotes the number of occurrences of the letter i in w.

Example 1 The word
A substitution σ over A_d is a non-erasing endomorphism of A_d*, completely defined by its images of the letters of A_d through the relation σ(w • w′) = σ(w) • σ(w′). The incidence matrix M_σ of σ is the d × d matrix whose i-th column is f(σ(i)) for every i ∈ A_d. For any word w ∈ A_d*, applying σ then f is the same as applying f then M_σ, i.e., M_σ f(w) = f(σ(w)).
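The commutation relation M_σ f(w) = f(σ(w)) is easy to check on a concrete substitution (the one of Example 3 below); words are plain strings over {'1','2','3'} and all helper names are ours.

```python
SIGMA = {"1": "1", "2": "12", "3": "1213"}   # substitution of Example 3

def f(w):
    """Abelianization: vector of letter counts."""
    return [w.count(c) for c in "123"]

def apply_sub(w):
    """Apply the substitution letter by letter and concatenate."""
    return "".join(SIGMA[c] for c in w)

# incidence matrix: the i-th column is f(sigma(i))
M = [[f(SIGMA[c])[i] for c in "123"] for i in range(3)]

def matvec(m, x):
    return [sum(mi * xi for mi, xi in zip(row, x)) for row in m]
```

For any word w, matvec(M, f(w)) and f(apply_sub(w)) coincide, which is the commutation relation.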

Example 2 For
For two substitutions σ, σ′, we denote by • their composition: (σ • σ′)(w) := σ′(σ(w)). Note that the above commutation relation implies that M_{σ•σ′} = M_{σ′} M_σ. A substitution is said to be unimodular if det(M_σ) = ±1. In that case, M_σ is invertible and there is a formula involving M_σ^{-1} for the (d − 1)-extension of σ [24, Proposition 2.11]. From now on, we assume that all substitutions are unimodular. The (d − 1)-extension of σ, acting on (d − 1)-dimensional faces, is then defined as follows:

E_{d−1}(σ)(x, i*) := ⋃_{(j,s) : σ(j) = p•i•s} ( M_σ^{-1}(x + f(s)), j* ).  (6)

The subscript means that the union is done over all pairs (j, s) such that j ∈ A_d and σ(j) = p • i • s, where i is a letter, whereas p and s are words.

Example 3 Let us consider the substitution σ : 1 → 1, 2 → 12, 3 → 1213 over A_3 and let us consider the image by E_2(σ) of (0, 1*). Since the type of that face is 1, the union in (6) must be done over all pairs (j, s) such that j ∈ A_3 and σ(j) = p • 1 • s. There are four such pairs: (1, ε), (2, 2), (3, 213), (3, 3). Furthermore, since the origin of the face is 0, every pair (j, s) gives the face (M_σ^{-1} f(s), j*). Geometrically, the action of E_{d−1}(σ) can be considered as a digitization of the action of M_σ^{-1} (see Section 2 and Fig. 3 in [19]). Example 3 is illustrated in Fig. 7 with that point of view.
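Formula (6) can be implemented directly for the substitution of Example 3, assuming the usual suffix convention: each occurrence σ(j) = p • i • s of the letter i contributes the face (M_σ^{-1}(x + f(s)), j*). The inverse incidence matrix is hardcoded below, and all helper names are ours.

```python
SIGMA = {1: "1", 2: "12", 3: "1213"}   # substitution of Example 3

def f(w):
    return [w.count(c) for c in "123"]

M_INV = [[1, -1, -1],                  # inverse of the incidence matrix
         [0,  1, -1],
         [0,  0,  1]]

def matvec(m, x):
    return tuple(sum(a * b for a, b in zip(row, x)) for row in m)

def E2(x, i):
    """Image of the face (x, i*) by E_2(sigma): one face
    (M^-1 (x + f(s)), j*) per occurrence sigma(j) = p.i.s."""
    out = set()
    for j, word in SIGMA.items():
        for pos, letter in enumerate(word):
            if int(letter) == i:
                s = word[pos + 1:]          # suffix of this occurrence
                y = [a + b for a, b in zip(x, f(s))]
                out.add((matvec(M_INV, y), j))
    return out
```

On the face (0, 1*), the four pairs listed in Example 3 produce four faces: one of type 1, one of type 2 and two of type 3.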
We extend (6) to unions of faces: for any union of faces F,

E_{d−1}(σ)(F) := ⋃_{w ∈ F} E_{d−1}(σ)(w).  (7)

It is convenient to use the following notation for translations of faces: if (x, i*) is a face and y is a vector, then (x, i*) + y := (x + y, i*), which extends in a natural way to unions of faces. One can then check that

E_{d−1}(σ)(x, i*) = E_{d−1}(σ)(0, i*) + M_σ^{-1} x,  (8)

because the term M_σ^{-1} x does not depend on the union in (6). The images by E_{d−1}(σ) of the faces (0, i*) for 1 ≤ i ≤ d thus suffice to define E_{d−1}(σ).
For two substitutions σ, σ′, we also denote by • the composition of their extensions: (E_{d−1}(σ) • E_{d−1}(σ′))(F) := E_{d−1}(σ)(E_{d−1}(σ′)(F)). We have the following key property [25, Proposition 1.2.4, item (1)]:

E_{d−1}(σ • σ′) = E_{d−1}(σ) • E_{d−1}(σ′).  (9)

From (6), (7), (8) and (9), one can derive the following proposition, which shows that one can incrementally compute the image by E_{d−1}(σ • σ′) of a face (0, i*) with a union-translation scheme:

Example 4 Let us consider the following substitutions: As shown in Fig. 8, σ′ describes how the images by E_2(σ • σ′) relate to the images by E_2(σ). First, let us consider E_2(σ • σ′)(0, 1*), which is red in (b). It has been obtained as the union of two previous sets: E_2(σ)(0, 1*), which is red in (a), and E_2(σ)(0, 3*), which is blue in (a), because the letter 1 belongs to both σ′(1) and σ′(3) and is in the last position, which means with no suffix and thus no translation.
Let us now consider E_2(σ • σ′)(0, 3*). It is a copy of E_2(σ)(0, 3*), which is blue in (a), translated by M_{σ•σ′}^{-1} e_1 = e_1, because the letter 3 appears only once in σ′(3), followed by the single-letter word 1.

Sequences of Substitutions and Patterns
A sequence of matrices (U n ) 0≤n≤N that reduces a vector v is introduced in Definition 1.In this subsection, we define its associated sequence of substitutions.
Definition 5 (Associated substitutions) Let (U_n)_{0≤n≤N} be a sequence of matrices that reduces a vector v ∈ P_d. Its associated sequence of substitutions is the sequence (σ_n)_{0≤n≤N} such that, for all n, the incidence matrix of σ_n is (t U_n)^{-1} and, for all i ∈ A_d, the word σ_n(i) starts with letter i.
Example 5 Let us consider the following matrices: Remark that (t U_1)^{-1} is the incidence matrix of two distinct substitutions, σ and σ′. Actually, the inverse of the transpose of any matrix in U_d is the incidence matrix of two distinct substitutions. According to Definition 5, σ_1 := σ, because σ(i) starts with letter i for all i ∈ A_d, which is not the case for σ′.
To save space, we set σ_{n···0} := σ_0 • σ_1 • ··· • σ_n, whose incidence matrix is M_{σ_{n···0}} = (t U_n)^{-1} ··· (t U_0)^{-1}. In accordance with Definition 1, we have thus a_n = t(M_{σ_{n···0}}) v_N. Table 2 shows, for a given sequence, how σ_{n···0} and a_n are related.
We will now focus on the image by E_{d−1}(σ_{n···0}) of faces with origin at 0. The result is a finite set of faces that we call a pattern. A full example is given by Table 2 and Fig. 9.
Definition 6 [Pattern] Let (U_n)_{0≤n≤N} be a sequence of matrices that reduces a vector v ∈ P_d, and let (σ_n)_{0≤n≤N} be its associated sequence of substitutions. Let W_0 be the set of faces of type i and origin 0 such that t v_N e_i = 1, i.e., W_0 := {(0, i*) | t v_N e_i = 1}. For all n ∈ {0, . . ., N}, the pattern W_n is defined as E_{d−1}(σ_{n···0})(W_0).

From a sequence of substitutions (σ_n)_{0≤n≤N}, there are at least two ways of generating patterns. In the first method, we compute the substitution σ_{n···0} and then apply the formula given by (6) to get W_n. The second method is incremental: we compute W_n from W_{n−1} as in a union-translation scheme thanks to Proposition 8. One can check that the time complexity of both methods linearly depends on the number of faces in W_n. By Proposition 10 or Theorem 11, item (iii), proven in the next subsection, this number is equal to ||a_n||_1. As a result, the overall time complexity of both methods is O(||a_n||_1).
From a vector v, one has first to compute a sequence of matrices that reduces v and then compute the associated sequence of substitutions. The overall time complexity thus depends on the reduction algorithm. For most of them, e.g., Brun, Selmer, H [5], every matrix choice takes constant time, while there are at most O(||v||_1) choices in total (Theorem 3). In that case, the reduction stage takes O(||v||_1) time. Then, the generation of W_N also takes O(||v||_1) time, because a_N = v by (1). However, note that there are reduction algorithms, e.g., R [5], L [22], which do not compute new matrices in constant time, but in logarithmic time, thus increasing the whole time complexity by a logarithmic factor.

Properties of Patterns
In this subsection, we show several properties of the patterns defined in Definition 6. Even if a pattern is a set of faces, we will say that a point x is in a pattern W, denoted by x ∈ W by abuse of notation, whenever there exists a face w ∈ W such that x is a vertex of w.
The following proposition shows that the patterns always contain some very specific faces and points.

Proposition 9
Let us assume that v N ∈ {1} ∪ {e 1 , . . ., e d }, and let A v N be the set {i ∈ A d | t v N e i = 1}. For all n ∈ {0, . . ., N } and for all i Proof Since every word σ 0 (i), . . ., σ n (i) starts with i, σ n•••0 (i) also starts with i. Let s be the suffix such that In that case, We therefore have: In addition, the point satisfies (4) and is thus in W n .
In plane-probing algorithms, the point used to select a new matrix at step n is exactly 1 − M −1 σ n•••0 e i (see Proposition 6 and (3)) and thus lies in W n by Proposition 9. The convention used in Definition 5 to associate only one substitution with a given matrix turns out to guarantee that the generated patterns contain those points.
The following proposition shows that all faces in W n have a specific location in space with respect to the direction a n . It is key to several other properties stated in Theorem 11.

Proposition 10 Let us assume that v N ∈ {1} ∪ {e 1 , . . ., e d }. For all n ∈ {0, . . ., N } and for all j ∈ A d , the quantities t a n x over the faces (x, j * ) ∈ W n take each value of the integer interval {0, . . ., t a n e j − 1} exactly once.
Proof By Definition 6 and according to (6), all faces of type j in W n are located along a path encoded by σ n•••0 ( j).
If there is only one letter i in σ n•••0 ( j), there is only one face in E d−1 (σ n•••0 )(0, i * ) whose origin is 0 and the proposition is obviously true.
Otherwise, let us denote by a and b two consecutive letters in σ n•••0 ( j). In other words . By abuse of terminology, we call the faces a and b and say that they are consecutive. The difference between the origins is equal to However, the quantity t a n M −1 Recall that if t v N e b = 1, the face (M −1 σ n•••0 e b , j * ) is kept in W n . Otherwise, t v N e b = 0 and the face is discarded. As a consequence, the dot product between a n and the difference between the origins of a and b is 1 if b is in W n and 0 otherwise. The set { t a n x | (x, j * ) ∈ W n } thus forms an integer interval whose lower bound is 0. The upper bound plus one is given by the total number of faces of type j in W n , which is equal to the number of letters in σ n•••0 ( j).

Fig. 9 The patterns are the images by E 2 (σ n•••0 ) of the faces with origin at 0. The sequence of substitutions is given in Table 2. It is associated with a sequence of matrices obtained from the reduction of (1, 2, 4) using algorithm H [5]. In the top row, the images of (0, 1 * ), (0, 2 * ) and (0, 3 * ) by E 2 (σ n•••0 ) are displayed in red, green and blue, respectively. In the bottom row, the faces of type 1, 2, 3 are colored in red, green and blue, respectively (Color figure online)

We are now able to gather several properties of patterns into our main theorem. Two extra properties hold if v N = 1: (iv) ∀n ∈ {1, . . ., N }, W n−1 ⊂ W n ; (v) ∀n ∈ {0, . . ., N }, W n contains points from which one can compute the vectors of a basis of the lattice.
Proof (i) By Proposition 10, if W n contains at least one face of type j ∈ A d , then, for every face (x, j * ) ∈ W n , we have 0 ≤ t a n x < t a n e j , which means that (x, j * ) is in the stepped plane of normal a n according to (5).
In addition, if n < N , note that the coordinates of a n are always nonnegative and the multiplication by t M σ n consists in adding a coordinate to another, so that a n ≤ a n+1 . By induction, we get a n ≤ a N and in particular t a n e j ≤ t a N e j . As a consequence, all faces of W n are also in the stepped plane of normal a N .
(ii) By Proposition 10, if W n contains at least one face of type j ∈ A d , then, for every face (x, j * ) ∈ W n , the quantity t a n x appears once and only once in the integer interval {0, . . ., t a n e j − 1}. Since the copies of W n are translated along the lattice, i.e., by a translation vector whose dot product with a n is zero, the union of all copies is the set {(x, j * ) | 0 ≤ t a n x < t a n e j }. Taking into account all face types, we get the stepped plane of normal a n according to (5).
It remains to prove that two distinct copies of W n cannot share a common face (x, i * ). Let us denote by W and W ′ the two copies and by t ≠ 0 the translation vector such that By definition of the lattice, we have t a n t = 0, which implies that t a n x = t a n (x − t). However, by Proposition 10, no two faces in W n have origins with the same dot product with a n . Since that fact is obviously translation invariant, the same holds for W ′. Hence (x, i * ) and (x − t, i * ) cannot both be in W , and consequently (x, i * ) cannot belong to both W and W ′.
(iv) If v N = 1, we have for all n ∈ {1, . . ., N }: where the inclusion comes from the trivial fact that for all j ∈ A d , σ n ( j) may always be written p • i, where i is a letter in A d , i.e., the word σ n ( j) ends with a letter i ∈ A d . (v) By Proposition 5, the vectors of a basis of the lattice are In addition, by Proposition 9, W n contains the points Consequently, for all i ∈ {2, . . ., d}, we have: which concludes the proof.
Theorem 11 shows that our method provides a sequence of patterns, all included in a given stepped plane (i). The last pattern periodically generates the underlying stepped plane without hole or overlap (ii) and is of minimal size (iii): if one sums the normal vectors of all its faces, one gets exactly a N , i.e., the normal of the stepped plane, and one cannot expect to find a smaller pattern with the same normal.
In addition, if v N = 1, the patterns form a hierarchical set by inclusion (iv) and they contain points from which one can compute the period vectors (v).

Geometrical and Topological Remarks
In this subsection, we discuss two additional and desirable properties: shape compactness and connectivity.
Our first remark is that the choice of the algorithm has a great impact on the shape of the pattern (see Fig. 10).
Experiments were conducted to measure how far the patterns are from the origin. We generated patterns using the reduction algorithms H, Brun (B), Selmer (S) and Jacobi-Perron (JP) on 811214 vectors with relatively prime components ranging from (1, 1, 1) to (100, 100, 100). We discarded R and L, which behave almost exactly like H in that range, as well as Poincaré, which probes for points very far away from the origin as shown in Fig. 5. For all input vectors, we computed the maximal distance between every point of the pattern and the origin. In Fig. 11, we plotted that maximal distance against the magnitude of the input vector. We smoothed the lines as in Fig. 5. The patterns generated from the geometrical approach (H) are on average much closer to the origin than those generated from the arithmetical approach (B, S and JP).
In our opinion, this is because the basis of the lattice returned by the algorithms R and L (resp. H) is always (resp. almost always) reduced [5,23]. There are no such results for the classical three-dimensional Euclidean algorithms, because they are not geometry-aware. A perspective is to bound the distance of the pattern boundary to the origin. For the algorithms H, R and L, such a bound may involve geometrical arguments based on the empty-circumsphere criterion.
Our second remark is about the connectivity of the patterns. In general, the patterns may be neither vertex- nor edge-connected. For instance, with Selmer, an example of a non-connected pattern is given in Fig. 10c. On the contrary, with Brun and Jacobi-Perron, the patterns are always edge-connected, as proven in [10,25]. Note that the tie-break rule mentioned in Sect. 2.5 is important to guarantee connectivity.
For all vectors with relatively prime components ranging from (1, 1, 1) to (100, 100, 100), we checked whether the pattern is edge-connected or not.Results are reported in Table 3.
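Such an edge-connectivity check can be sketched as follows. We assume here that a face (x, i * ) is represented by its origin x and a 0-based type, and that (x, i * ) is the unit square spanned at x by the two basis vectors other than e i ; this representation is an assumption of the sketch, not a data structure from the paper:

```python
def face_edges(x, i):
    """Edges of the face (x, i*): the unit square at x spanned by the
    two basis directions other than e_i (i is 0-based). Each edge is a
    frozenset of its two endpoint vertices."""
    u, v = [tuple(1 if k == j else 0 for k in range(3))
            for j in range(3) if j != i]
    add = lambda p, q: tuple(a + b for a, b in zip(p, q))
    c00, c10, c01 = x, add(x, u), add(x, v)
    c11 = add(c10, v)
    return {frozenset(e) for e in
            [(c00, c10), (c00, c01), (c10, c11), (c01, c11)]}

def edge_connected(faces):
    """Graph traversal over faces; two faces are adjacent if they
    share an edge. Returns True for the empty set by convention."""
    faces = list(faces)
    if not faces:
        return True
    edges = [face_edges(x, i) for x, i in faces]
    seen, stack = {0}, [0]
    while stack:
        a = stack.pop()
        for b in range(len(faces)):
            if b not in seen and edges[a] & edges[b]:
                seen.add(b)
                stack.append(b)
    return len(seen) == len(faces)
```

For example, two faces of type 3 (index 2) at (0, 0, 0) and (1, 0, 0) share an edge and are reported connected, while faces touching only at a vertex are not.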
Unfortunately, there is an opposition between proximity and connectivity. Indeed, Brun and Jacobi-Perron provide patterns that are edge-connected, but also much farther from the origin than the patterns obtained from the geometrical approach, which may however be disconnected. It seems that one cannot have all geometrical, topological and combinatorial properties at the same time, but that statement has yet to be rigorously proven.

Fig. 11 For vectors with relatively prime components ranging from (1, 1, 1) to (100, 100, 100), the maximal distance between the origin and every point of the pattern is plotted against the magnitude of the vector
The lack of a connectivity guarantee is clearly a drawback. It seems indeed natural to require that patterns be connected. However, even unconnected patterns may be good candidates to locally approximate tangent planes in the context of digital surface analysis. We tend to think that shape compactness is a more decisive property than connectivity for an accurate and convergent approximation of tangent planes, but this has to be further studied.
To end the section, let us mention the method developed in [12], which generates, from an input vector v, a set of points included in a digital plane of normal v. It is incremental and based on several three-dimensional continued-fraction algorithms. It generates point sets that are provably connected but that may have holes. In addition, those sets are not thick enough, e.g., they do not contain any point x such that t vx = ‖v‖ 1 − 1, and they are composed of a great number of points that cover the digital plane with overlaps. That is why we believe that our method has a greater practical interest in the context of digital surface analysis.

Conclusion
We have introduced a d-dimensional generalization of the Euclidean algorithm, which covers many algorithms: not only Euclid's algorithm itself and the classical three-dimensional Euclidean algorithms like Brun, Selmer, Poincaré or Jacobi-Perron, but also plane-probing algorithms recently developed in the context of digital geometry. All the classical three-dimensional Euclidean algorithms can even be translated into a plane-probing version thanks to Proposition 6 and the discussion that follows it.
All those algorithms have in common that they return a sequence of elementary matrices transforming a given normal vector into a terminal element. That output can be used, together with a geometrical extension of substitutions, to generate sets of (d − 1)-dimensional faces.
The generated sets have three main combinatorial properties, listed in Theorem 11: they can form a hierarchical set of patterns, all included in a given digital hyperplane. The pattern of the highest level is of minimal size and periodically generates the underlying digital hyperplane without hole or overlap, hence the name pattern. In addition, those patterns experimentally exhibit an additional geometrical property when used together with a plane-probing algorithm: they are rather compact and close to the origin, as illustrated in Fig. 10. It is, however, a perspective to quantify compactness and proximity and to provide upper bounds on such measures. To the best of our knowledge, no other method, whether based on digitization or on union-translation schemes, has all the above properties.
Plane-probing algorithms compute a normal vector by iteratively updating a triangle that locally fits a digital plane or, with slight changes, a digital surface. If the digital surface is not convex, the triangles may jump over holes or stab the digital surface, because the set of probes, which is too sparse, does not faithfully represent the digital surface. In order to correctly approximate the tangent plane in such cases, our idea is to incrementally compute a pattern instead of a triangle: new probes provide the next matrix in the sequence, we then use the proposed method to generate the next pattern, and we finally check whether it fits the digital surface or not, as illustrated in Fig. 1. We plan to thoroughly study how accurately such patterns approximate tangent planes. We hope this will improve the local analysis of digital surfaces using plane-probing.

Proposition 5
and assuming w.l.o.g. that v N = e 1 . The (d − 1)-tuple B n is a basis of the lattice.

Proposition 7
Every face in S v has its vertices in P v . Conversely, every point in P v is a vertex of a face in S v . Proof Let us introduce the following sets of partial sums: ∀i ∈ {1, . . ., d}, T i := { ∑ j∈J e j | J ⊆ {1, . . ., d} \ {i} }.
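The sets T i of partial sums can be enumerated directly; the encoding of points as coordinate tuples is an assumption of this sketch:

```python
from itertools import combinations

def partial_sums(d, i):
    """T_i: all sums of subsets of {e_1, ..., e_d} \ {e_i}, encoded as
    0/1 coordinate tuples of length d (1-based index i is excluded)."""
    idx = [j for j in range(1, d + 1) if j != i]
    T = set()
    for r in range(len(idx) + 1):
        for J in combinations(idx, r):
            T.add(tuple(1 if k in J else 0 for k in range(1, d + 1)))
    return T

T1 = partial_sums(3, 1)
```

For d = 3 and i = 1, there are 2^(d−1) = 4 such sums, none of which has a nonzero first coordinate.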

Theorem 11
Let us assume that v N ∈ {1} ∪ {e 1 , . . ., e d }. The three following properties hold for the patterns defined in Definition 6: (i) ∀n ∈ {0, . . ., N }, W n ⊂ S a n and W n ⊂ S a N . (ii) ∀n ∈ {0, . . ., N }, the copies of W n translated along the lattice (Definition 2 and Proposition 5) form a partition of S a n . (iii) ∀n ∈ {0, . . ., N }, ∀ j ∈ A d , W n has t a n e j faces of type j.

Fig. 10
Fig. 10 Patterns of normal t v = (2, 6, 15) generated by H (a), Jacobi-Perron (b) and Selmer (c). In a and b, the patterns are edge-connected, whereas in (c), there are two vertex-connected components and three .

Proof The k-th coordinate of Uv is the result of subtracting the l-th coordinate of v from the k-th, i.e., t e k (Uv) = v k − v l . The other coordinates of Uv are the same as those of v. On one hand, we have Uv ≠ v if v l ≥ 1. On the other hand, we have Uv ∈ (T ) if and only if Uv ∈ P d and there exists t ∈ T such that t ≤ Uv. The first condition is true because v ∈ P d and U is unimodular. The second condition is true if t e k (Uv) ≥ 1 (resp. t e k (Uv) ≥ 0) if T = {1} (resp. T = {e 1 , . . ., e d }).

Table 2
A sequence of substitutions and other related quantities