Nullspaces yield new Explicit Runge–Kutta pairs


Mathematics Subject Classification (2000) 65L05 · 65L06 · 65L20

1 Introduction

1.1 Initial value problems and their approximate solutions

For a system of ordinary differential equations, consider an autonomous initial value problem (IVP) with a vector solution y(x) beginning at the point (x_0, y_0). The solution is required in an interval [x_0, X], and this problem is denoted by

   y'(x) = f(y(x)),   y(x_0) = y_0,   x ∈ [x_0, X],   (1)

where y_0 ∈ R^m, and the function f : R^m → R^m is assumed to be sufficiently smooth. At each point of its domain of definition, the solution is assumed to have a Taylor series, and the accuracy of a method is determined by its order, the number of terms of the Taylor series reflected by the approximation. (Addition of a single differential equation, u'(x) = 1, u(x_0) = x_0, reduces a non-autonomous system to one that has this form.) Let N be a positive integer, and define a stepsize h = (X − x_0)/N and a uniform grid x_n = x_0 + nh, n = 0, 1, ..., N. The accuracy of a method is partly a function of its order of accuracy, and partly of the magnitudes of the coefficients multiplying the local truncation error terms. Butcher [1] has shown that for a method to have order p for (1), the coefficients have to be chosen to satisfy N_p algebraic 'order conditions' in the coefficients of the method. To approximate the solution y = y(x) ∈ R^m of the IVP (1), together with estimates of the local error in each step, consider an s-stage explicit Runge–Kutta pair of methods specified by

   Y_i^[n] = y_n + h Σ_{j=1}^{i−1} a_{i,j} f(Y_j^[n]),   i = 1, 2, ..., s,
   y_{n+1} = y_n + h Σ_{i=1}^{s} b_i f(Y_i^[n]),
   ŷ_{n+1} = y_n + h Σ_{i=1}^{s} b̂_i f(Y_i^[n]),   n = 0, 1, 2, ..., N − 1.   (2)
Within each step of length h, the s values Y_i^[n] form low order approximations to the solution at {x_0 + h(n + c_i), i = 1, ..., s}, and the coefficients {a_{i,j}, b_i, b̂_i}, with c_i = Σ_{j=1}^{i−1} a_{i,j}, are to be chosen specifically to obtain approximations y_{n+1} of local order p and ŷ_{n+1} of local order p − 1 at each step endpoint. Parametric families of such methods have been derived by the author [10], [11], with particularly robust and efficient examples described more recently in [12], and coefficients of these methods are displayed on the author's website. Derivations in those papers form basic tools for the new pairs obtained here.
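The stepping scheme (2) is easy to sketch in code. The following is a minimal illustration only, using the published Bogacki–Shampine 3(2) embedded pair rather than any of the larger pairs discussed in this paper; the function name is a hypothetical choice. One step produces a propagated approximation and an error estimate from the difference of the two approximations.

```python
import math

# Bogacki-Shampine 3(2) embedded pair: a 4-stage tableau giving
# order-3 weights b and order-2 weights bh (an illustrative stand-in
# for the (13,7-8) pairs discussed in the text).
A = [[],
     [1/2],
     [0.0, 3/4],
     [2/9, 1/3, 4/9]]
b  = [2/9, 1/3, 4/9, 0.0]    # propagating weights, order 3
bh = [7/24, 1/4, 1/3, 1/8]   # embedded weights, order 2

def rk_pair_step(f, y, h):
    """One step of the pair: returns (y_{n+1}, local error estimate)."""
    k = []
    for i in range(4):
        # Stage value Y_i = y_n + h * sum_j a_{i,j} f(Y_j)
        Yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(Yi))
    y1  = y + h * sum(bi * ki for bi, ki in zip(b, k))
    yh1 = y + h * sum(bi * ki for bi, ki in zip(bh, k))
    return y1, abs(y1 - yh1)

# Scalar test problem y' = y, y(0) = 1, one step of size h = 0.01.
y1, err = rk_pair_step(lambda y: y, 1.0, 0.01)
```

In a full solver the estimate `err` would drive stepsize control; here it simply illustrates how the pair yields both a propagated value and an error estimate from the same stage evaluations.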
With appropriate allowances for a system of differential equations, examples will be represented as Butcher tableaux of the form

   c | A
     | b
     | b̂

Table 1: A Butcher tableau for an error-estimating Runge–Kutta pair

2 Alternate Order Conditions
The standard formulation of the conditions required for an s-stage Runge–Kutta method to have order p was tabulated by Butcher [1] using an isomorphic mapping from the set of rooted trees having up to p nodes onto polynomial constraints on the coefficients {b_i, a_{i,j}, c_j} of a method. For each rooted tree of q nodes, 1 ≤ q ≤ p, the constraint is determined in two parts: first, assign sequentially to the nodes of a tree t indices i, j, k, ..., so that the root has index i and the tree has q distinct indices. For the elementary weight Φ(t), assign b_i to the root; for each internal edge i–j occurring, assign a_{i,j} to node j, and to each terminal node l attached to node k, assign c_k. Now, Φ(t) is equal to the sum over all indices of the products of the coefficients assigned to all nodes. For example, for the 5-node tree t_11 (a two-leaf branch and a single leaf attached to the root), this partition gives

   Φ(t_11) = Σ_{i,j} b_i c_i a_{i,j} c_j^2.

The constraint is completed by equating Φ(t) to the reciprocal of an integer-valued function t! identified on this rooted tree. For this, assign the integer 1 to each terminal node of the tree; to each other node assign 1 plus the total number of its descendent nodes. Then t! is the product of all integers assigned to the tree. For the example above, t! = 1·1·1·3·5 = 15, and the full constraint for t_11 appears in equation (3). We now observe that the standard order condition for a general tree t is

   Φ(t) = 1/t!.   (4)

Below, for each tree we shall specify an alternate order condition to (4) as a linear combination of Φ(t) for two different trees.
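The density t! can be computed recursively: it equals the number of nodes of t times the product of the densities of the subtrees attached to its root. A short sketch, representing a rooted tree as a nested tuple of its subtrees (a leaf is the empty tuple); the five-node example above reproduces t! = 15:

```python
def order(t):
    """Number of nodes of a rooted tree given as a tuple of subtrees."""
    return 1 + sum(order(s) for s in t)

def density(t):
    """t! = |t| times the product of the densities of the root's subtrees."""
    result = order(t)
    for s in t:
        result *= density(s)
    return result

leaf = ()
# t_11: a branch carrying two leaves, plus one leaf, attached to the root.
t11 = ((leaf, leaf), leaf)
```

For t11 the recursion assigns 5 to the root, 3 to the internal node, and 1 to each of the three leaves, so density(t11) yields 5·3·1·1·1 = 15.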
For the column vector e = [1, ..., 1]^t consisting of s ones, each order condition can be represented using the vectors {b, e}, a matrix A which is strictly lower triangular for explicit methods, and a diagonal matrix C for which Ce = c. The left side of each Φ(t) is a product of b followed by products of powers of A and C, selected according to the number of times and positions the symbols a_{i,j} and c_j occur in (4), followed by one copy of e either for each side branch having at least two contiguous nodes or else for the final branch; the right side remains unchanged. Hence, the order condition for t_21 can be written in this vector-matrix form. The more general order condition may be written in the form

   bAC^{k_1}(AC^{l_1}e) ··· AC^{k_m}(AC^{l_m}e)AC^{k−1}e = 1/t!,

where t! is specified by the rule above for the right side of a standard order condition. Factors AC^{l_i}e can be more general, as indicated following (7) below. Now, we specify a particular set of alternate equivalent order conditions. That is, every method of s stages has order p if and only if its coefficients satisfy both the standard order conditions tabulated by Butcher and this set of alternate order conditions. In fact, there are several ways to select a set of 'alternate order conditions'. The approach is to use a (backward) sequence of sets of linear combinations of standard order conditions to eliminate the rational fraction 1/t! from the right side of each of the order conditions one by one (excepting only the first order condition, namely Σ_i b_i = 1).
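These vector-matrix forms are straightforward to check numerically for any given tableau. As a sketch (using the classical fourth-order method as an illustrative example, not one of the pairs derived here), the following verifies the tall-tree condition bAACe = 1/24 and the condition bCACe = 1/8 with exact rational arithmetic:

```python
from fractions import Fraction as F

# Classical RK4 tableau: b, c, and strictly lower triangular A.
b = [F(1,6), F(1,3), F(1,3), F(1,6)]
c = [F(0), F(1,2), F(1,2), F(1)]
A = [[0,0,0,0],
     [F(1,2),0,0,0],
     [0,F(1,2),0,0],
     [0,0,F(1),0]]

def matvec(M, v):
    return [sum(M[i][j]*v[j] for j in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

Ac  = matvec(A, c)     # (Ac)_i = sum_j a_{i,j} c_j
AAc = matvec(A, Ac)

tall  = dot(b, AAc)                              # b A A C e, density 24
mixed = dot(b, [ci*x for ci, x in zip(c, Ac)])   # b C A C e, density 8
```

Exact fractions make such verifications unambiguous: each left side reproduces the reciprocal of the corresponding tree density exactly.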
To specify these alternate order conditions for a method of order p, the strategy for each value of p is to consider the standard conditions in Butcher [2, page 66] in reverse sequence. Start from tree number N_p, and after some alternate conditions have replaced standard order conditions, consider the next standard order condition corresponding to a tree t. If t has at least one branch leaving the root having at least two contiguous edges, the standard order condition will contain a factor a_{i,j} c_j^{k−1} with k maximal. Subtract from this standard order condition for tree t, 1/k times the previous order condition for t′, which differs from that of t only by having the stated factor replaced by the factor c_i^k. This yields the alternate order condition (where I1, I2 include all factors in the second part t! in (5) not arising from the factor a_{i,j} c_j^{k−1}), and this constraint is homogeneous in the coefficients. Each of these alternate order conditions will now take the form

   bAC^{k_1}(AC^{l_1}e) ··· AC^{k_m}(AC^{l_m}e) q^{[k]} = 0,   (6)

where we define the stage-order or 'subquadrature' expressions as

   q^{[k]} = Ac^{k−1} − c^k/k.   (7)

Formula (6) has different forms depending upon the type of tree it arises from. Trees with at least two branches, each having two or more contiguous nodes, will generate factors such as AC^{l_i}e or more general forms (e.g. AAC^{l_i}e, not shown), and give order conditions of type D below. If there is exactly one single branch leaving the root having at most two contiguous nodes, this order condition will be of type C, and hence equation (6) will be bC^{k_1} q^{[k]} = 0. A form with three contiguous nodes would have the form bC^{k_1}AC^{k_2} q^{[k]} = 0.
Otherwise, t will be a bushy tree of k − 1 nodes attached to the root. For k > 1, the corresponding quadrature order condition Q_k = bC^{k−1}e − 1/k will be replaced by the alternate order condition, which may also be written in the form bC^{k−2}(kC − (k−1)I)e = 0. Each of these alternate order conditions has the form

   β_i * α_j = 0,   2 ≤ i + j ≤ p,

where β_i is a product of b with i − 1 copies of A and/or C, and α_j is a polynomial in A and/or C of terms up to degree j. Hence, the coefficients of a method of order p must have β_i orthogonal to α_j, and as each is obtained as a single weighted difference of two standard order conditions, we denote these order conditions as Singly-Orthogonal Order Conditions (SOOC). We will explore this relationship in more detail later. As a simple illustration, the (scaled) difference of the standard order conditions for t_7 and t_5 yields the SOOC for t_7 as b * q^{[3]} = 0. This procedure changes each of the N_p − 1 standard order conditions beyond the first into a singly-orthogonal order condition (SOOC) by essentially adding a multiple of a standard order condition lower in the sequence, and hence this process is fully reversible. This establishes

Theorem 1 An s-stage method is of order p if and only if its coefficients (b, A, C) satisfy b * e = 1 and the N_p − 1 singly-orthogonal order conditions.

Proof: For a tree with at least one internal node, consider the partitioning t = L(t) * R(t), in which R(t) is a subtree obtained by pruning one branch from the root of t. Suppose R(t) contains a penultimate node j with parent i (possibly the root) having k − 1 terminal nodes as children. Now, let t̃ = L(t) * R(t̃) be an alternate tree for tree t in which R(t̃) has k nodes attached to node i to replace edge i–j and the k − 1 terminal nodes of R(t). With this replacement by R(t̃), it follows that in the integers giving t!, the value R(t)! = k · R(t̃)!. This same replacement also implies that t! = k · t̃!. This in turn implies that Φ(t) = 1/t! if and only if Φ(t̃) = 1/t̃!. Accordingly, for any particular tree t, we could replace the standard order condition by the requirement that

   Φ(t) − (1/k) Φ(t̃) = 0,

because the standard order condition for t̃ lies within the remaining (lower-sequenced) order conditions. (Observe, as above, that if R(t) has at most two contiguous nodes, R(t̃) becomes a set of k nodes attached to the root of L(t), and the result holds in the same way.) A similar argument shows that (6) and (8) are necessary for a method to be of order p. Hence, along with the first order condition, exchanging each standard order condition for the corresponding alternate order condition retains a necessary set of conditions for the method to have order p. The standard order conditions establish that the singly-orthogonal order conditions hold. On the other hand, if the singly-orthogonal order conditions hold, the process described above can be reversed, starting from the first order condition and the lowest-sequenced SOOC, to establish that the standard order conditions hold as well.
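To make these constructions concrete, the sketch below evaluates the subquadrature expressions and several SOOCs exactly on the classical fourth-order tableau (an illustrative method, not one of the pairs derived in this paper). It assumes q^{[k]} = Ac^{k−1} − c^k/k, and writes the bushy-tree SOOC in the scalar form k·bC^{k−1}e − (k−1)·bC^{k−2}e = 0, the combination of Q_k and Q_{k−1} that eliminates the right side:

```python
from fractions import Fraction as F

# Classical RK4 tableau: order 4, so all SOOCs with i + j <= 4 must hold.
b = [F(1,6), F(1,3), F(1,3), F(1,6)]
c = [F(0), F(1,2), F(1,2), F(1)]
A = [[0,0,0,0],
     [F(1,2),0,0,0],
     [0,F(1,2),0,0],
     [0,0,F(1),0]]

def q(k):
    """Subquadrature expression q^[k] = A c^{k-1} - c^k / k, componentwise."""
    return [sum(A[i][j]*c[j]**(k-1) for j in range(4)) - c[i]**k/F(k)
            for i in range(4)]

def bdot(v):
    return sum(bi*vi for bi, vi in zip(b, v))

# q^[1] vanishes identically because each row sum of A equals c_i.
q1_zero = all(x == 0 for x in q(1))

# Type C SOOCs: b*q^[2] = 0 (from bACe = 1/6 and bC^2e = 1/3), and
# b*q^[3] = 0, the SOOC for t_7 (from bAC^2e = 1/12 and bC^3e = 1/4).
type_c = [bdot(q(2)), bdot(q(3))]

# Bushy-tree SOOCs: k*bC^{k-1}e - (k-1)*bC^{k-2}e = 0 for k = 2, 3, 4.
def moment(k):
    return sum(bi*ci**(k-1) for bi, ci in zip(b, c))
bushy = [k*moment(k) - (k-1)*moment(k-1) for k in (2, 3, 4)]
```

Each homogeneous combination vanishes exactly, while the underlying standard conditions individually equal the reciprocals of tree densities.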

Types of order conditions
Order conditions may be partitioned according to the types of problems for which they determine the accuracy of corresponding methods, and there are four distinct types of order conditions. To solve y′ = f(x), the weights and nodes must satisfy the quadrature conditions, each of which corresponds to a (bushy) tree of k − 1 nodes each directly connected to the root. To solve linear constant coefficient non-homogeneous problems y′ = Ky + f(x), order conditions corresponding to tall trees, in which each terminal node is connected to the single penultimate node, must be satisfied. To solve variable coefficient linear problems, order conditions in which each terminal node is connected to a single node of a tall tree must be satisfied. For all other problems, the order conditions required correspond to trees that have at least two branches each containing two contiguous edges. Typical examples of the four types of standard order conditions are, respectively, bC^3e = 1/4 (type A), bAACe = 1/24 (type B), bCACe = 1/8 (type C), and b((ACe) * (ACe)) = 1/20 (type D), where * denotes elementwise multiplication.

Lemma 1 Suppose that the coefficients satisfy either b_i = 0 or q_i^{[j]} = 0, 0 ≤ j ≤ (p − 1)/2, for each i = 1, ..., s. Then each order condition of type D collapses to one of type C.
Proof: For any tree with two or more branches from an internal node or the root, each having at least two contiguous edges, assign one maximal branch to be that for which conversion to the alternate form including q^{[k]} occurs. Any other branch may contain a factor a_{i,l} c_l^{j−1} with j ≤ k. As this tree has no more than p nodes and each branch excludes the root, it follows that the total number of nodes in the two branches together satisfies 2j ≤ j + k ≤ p − 1, so that j ≤ (p − 1)/2. The constraints in the lemma now imply that either b_i = 0, or else a term Σ_l a_{i,l} c_l^{j−1} in the recessive branch can be replaced by c_i^j / j. A succession of these replacements reduces a type D condition to one of form (6) in which all l_i have vanished, that is, type C.
⊓⊔ Now, we illustrate the singly-orthogonal forms for each of the four different types of order conditions. These utilize vector-matrix forms and the stage-order expressions q^{[k]}. The orthogonality within each singly-orthogonal order condition might be determined in more than one way. Here, a separation of each SOOC into vectors β_i and α_j is denoted by '*' to identify the orthogonality; for clarity, this same partition is also indicated (temporarily) here by using boldface for the first vector and italics for the second. Table 2 illustrates how the numbers of the four types of order conditions increase with increasing order. Only the first order condition Σ_i b_i = 1 does not yield a SOOC, so there are N_p − 1 singly-orthogonal order conditions to solve. (Since b_i appears linearly in each SOOC, vector b can be scaled so that Σ_i b_i = 1.) In contrast to form (3), the right side of each standard order condition can be represented using multiple integrals. For example, the condition bCACe = 1/8 may be written as ∫_0^1 x ∫_0^x t dt dx = 1/8. Observe that each occurrence of b or A in Φ(t) is replaced by a definite integral, and each power of C is replaced by the same power of some c. This form is often convenient when formulating the order conditions for direct solution.
3 Solving the order conditions

Methods for linear constant coefficient non-homogeneous problems
We have already seen that for constraints on b_i and the stage-orders, conditions D collapse to conditions C. We shall assume that sufficient conditions hold for this to occur. We now show how to satisfy conditions A and B together for a p-stage method of restricted order p. While such methods can be used to solve a problem of form y′ = Ky + f(x), the approach was used to derive classical pairs ([10]), and will be used here as a skeleton to derive new methods.
Lemma 2 There exist p-stage methods of order p for linear constant coefficient non-homogeneous problems of the form y′(x) = Ky(x) + f(x), where K is a constant matrix.
Proof: (a) For p distinct nodes c_i, there is a unique solution for b having p entries satisfying the quadrature conditions bC^{k−1}e = 1/k, k = 1, ..., p. (b) More generally, for p distinct nodes c_i, define for each k = 1, ..., p, the corresponding modified quadrature conditions (12). Then, for each k = 1, ..., p, unique values L_{p+2−k,i}, i = 1, ..., p + 1 − k, may be found. Now form a strictly lower triangular array L with L_{p+2−k,i}, i = 1, ..., p + 1 − k, making up the elements of row p + 2 − k, k = p + 1, ..., 2, and with row p + 1 containing L_{p+1,i} = b_i, i = 1, ..., p. With L_{p+2−k} as defined by (12), it now follows that coefficients a_{i,j} can be computed sequentially up back diagonals of L, starting from the right-most diagonal, using L_{q+1,j} a_{j,q−k} / L_{q+1,q}, q = p + 1, p, ..., k + 1, for each k = 1, ..., p − 1. This gives the weights b_i = a_{p+1,i}, i = 1, ..., p, and the coefficients a_{i,j}, 1 ≤ j < i ≤ s, of a p-stage explicit Runge–Kutta method that satisfies all of conditions A and B to order p for this restricted class of problems.

⊓ ⊔
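Part (a) of this construction is ordinary interpolatory quadrature: for p distinct nodes, the weights b solving the Vandermonde system bC^{k−1}e = 1/k, k = 1, ..., p, are obtained by integrating the Lagrange basis polynomials over [0, 1]. A sketch for p = 4 with hypothetical equally spaced nodes {0, 1/3, 2/3, 1} (not nodes from any method in this paper), which recovers the 3/8-rule weights:

```python
from fractions import Fraction as F

def lagrange_weights(nodes):
    """Quadrature weights on [0,1]: w_i is the exact integral of the
    i-th Lagrange basis polynomial, via rational arithmetic."""
    weights = []
    for i, ci in enumerate(nodes):
        poly = [F(1)]          # polynomial coefficients, index = power
        denom = F(1)
        for j, cj in enumerate(nodes):
            if j != i:
                new = [F(0)]*(len(poly)+1)
                for k, a in enumerate(poly):   # multiply poly by (x - c_j)
                    new[k+1] += a
                    new[k]   -= cj*a
                poly = new
                denom *= ci - cj
        integral = sum(a/F(k+1) for k, a in enumerate(poly))
        weights.append(integral/denom)
    return weights

nodes = [F(0), F(1,3), F(2,3), F(1)]
b = lagrange_weights(nodes)
# b satisfies the type A conditions b C^{k-1} e = 1/k for k = 1..4.
moments = [sum(bi*ci**(k-1) for bi, ci in zip(b, nodes)) for k in range(1, 5)]
```

The same routine yields the unique weight vector for any set of p distinct nodes, which is the starting point for the L-tableau of Lemma 2.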
Table 3 following provides coefficients of an 8-stage method of order 8 for linear constant coefficient non-homogeneous initial value problems. While order p = 8 can be illustrated with simple examples, there is no choice of the arbitrary nodes that will lead to an embedded method of order 7 using the six leading stages of the (8,8) method and only one additional stage. Possibly other approaches to error estimation could be developed for such methods. The motivating theme of this study is to determine the structure of a new family of (13,7-8) pairs of explicit Runge–Kutta methods. Initially, we derive the main 12-stage method of order 8 for such problems. For this, the L-tableau of Lemma 2 has nine rows, and we split it in two ways. First, we insert four new unspecified rows after the first row, and then we insert four columns after the first column. New column values for elements L_{i,j} are inserted so that L_{i,j} = 0, i = 9, ..., 13, j = 2, ..., 5. This expanded L-tableau will have the same values in rows 10 to 13, with columns 2 to 8 moved four columns to the right. Values of a_{i,j} for rows 2 to 9 of matrix A of a method will be computed in advance to make stage-order values equal to zero, and as well so that the remaining order conditions of type C will be satisfied. After computing these values for rows 2 to 9 of A, values of L_{i,j} in rows 10 to 13 will be used to compute corresponding values of a_{i,j} in these rows using back substitution.

Mutually orthogonal matrix representations
We now extend the concepts of β_i and α_j to matrices representing row and column spaces respectively. For β_i, there is a very easy extension.

Definition 1 For each i, we define β_i to be a matrix of s columns whose rows are left parts of SOOCs having products of up to i factors.
For example, β_i may contain b and other rows as appropriate. One possible choice for β_4 is the matrix with rows {b, bC, bC², bC³, bAA, bAAA}, but this matrix could contain more rows such as bACA or bCAC, or, from conditions D, (bC) * (ACe) (where * denotes elementwise multiplication). It will be convenient to let β̄_i denote a matrix containing a maximum number of different rows, each a product of up to i factors.
Inconveniently, the extension to α_j is more restrictive.
Definition 2 For each j and β_j, we define α_j to be a matrix of s rows whose columns are right parts of SOOCs, each a sum of products of up to j factors, such that each column of α_j is orthogonal to all rows of β_j; we use ᾱ_j to designate a matrix containing a maximum number of different columns.
It turns out that each α_j will contain right parts of SOOC conditions B, C and D, but the only right part arising from conditions A will be one based on a Jacobi polynomial of degree j (orthogonal to all polynomials of lower degree). Hence, matrix α_j will not contain e = [1, ..., 1]^t, but α_1 could contain (I − 2C)e, and α_2 could contain (I − 6C + 6C²)e and/or q^{[2]}. Another example of α_2 arises from equations (15) below. An example of α_3 is ᾱ_3 = [q^{[2]}, Cq^{[2]}, Aq^{[2]}, q^{[3]}, ...]. We observe now that any two matrices β_i and α_j are pairwise orthogonal whenever 1 < i ≤ j and i + j ≤ p. Hence, each row of β_i is a left nullvector of α_j, and each column of α_j is a right nullvector of β_i.

The Nullspace Theorem
From these definitions, we have the following:

Theorem 2 For an s-stage method of order p for (1), it is necessary for each i ≤ j with i + j ≤ p that β_i * α_j = 0.

To derive methods, we might try to characterize coefficients of a method that possess such orthogonality properties. To this end, orthogonality properties of some new methods found using Test21 have been studied. For these methods, it turns out that there is a surprising two-parameter partitioning of matrix A that will be displayed later.
To illustrate how this result is useful, consider the four order conditions for three-stage methods of general order three: be = 1, bCe = 1/2, bC²e = 1/3, and bACe = 1/6. Four solutions with b_3 ≠ 0 exist (see Butcher, [2], p. 63). We require b orthogonal to the columns of α_2 with b ≠ 0, so that α_2 must have rank at most 2. Hence, α_2 contains a row or column of zeros, or else has linearly dependent rows, but not all occurrences lead to methods. The four solutions that Butcher reports arise from the last two columns being proportional, the last two entries of row three being zero, and two occurrences of the second column being zero.
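For a concrete instance, take Heun's third-order method with c = [0, 1/3, 2/3] and b = [1/4, 0, 3/4] (a standard textbook example, not necessarily one of Butcher's four solutions). The sketch below verifies the four order conditions and then the orthogonality of b to two candidate columns of α_2, namely q^{[2]} and (I − 6C + 6C²)e:

```python
from fractions import Fraction as F

# Heun's third-order method: one concrete 3-stage method of order 3.
c = [F(0), F(1,3), F(2,3)]
b = [F(1,4), F(0), F(3,4)]
A = [[0,0,0],
     [F(1,3),0,0],
     [0,F(2,3),0]]

def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

e  = [F(1)]*3
Ac = [dot(row, c) for row in A]

# The four standard order conditions for order three.
conds = [dot(b, e), dot(b, c), dot(b, [ci*ci for ci in c]), dot(b, Ac)]

# Two columns of alpha_2, each of which must be orthogonal to b:
q2  = [ac - ci*ci/F(2) for ac, ci in zip(Ac, c)]   # q^[2] = Ac - c^2/2
jac = [1 - 6*ci + 6*ci*ci for ci in c]             # (I - 6C + 6C^2)e
orth = [dot(b, q2), dot(b, jac)]
```

Here the second entry of q^{[2]} is nonzero, but the corresponding weight b_2 = 0, illustrating how the orthogonality b * α_2 = 0 can hold without q^{[2]} vanishing componentwise.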

Twelve stage methods of general order eight
In [10] and [11], the author derived a parametric family of (13,7-8) Runge–Kutta pairs. That is, for a twelve-stage method of order eight, the first ten stages plus one additional thirteenth stage were used to obtain a second, different method of order seven [10]: the solution of an IVP could be propagated by either method, and the difference of the two approximations at each step would provide an estimate of the local truncation error that could be utilized to control the stepsize to minimize local errors. In deriving these pairs, the main focus was on the (12,8) method, and that focus continues here. Assume the nodes are constrained as in Theorem 3(1.) below to satisfy some stage-order conditions, and one order condition of type C when three other conditions hold.
The stage orders (SO) are emphasized for each stage. For the new methods, this anticipates a reduction of stage-orders from SO=4 to SO=3 in stages 7 to 12. This reduction is to be replaced by orthogonality conditions on the corresponding stages, but even more is needed.
Even with this reduction in the stage-orders, there remain a number of identities among components of α_4. Because a_{i,2} = 0, i > 3, a_{i,3} = 0, i > 5, and the stage-orders, several of the corresponding vectors are equal for ê = [0, 0, 0, 0, 0, 1, ..., 1]^T, and for ē = [0, 0, 0, 1, ..., 1]^T we find additional such identities. These identities can be utilized in showing β_i * α_j = 0 after showing the orthogonality for one α_j from either set of identities. We refer to the use of lower orders and these identities as proofs using stage orders.
For each new method of order eight, it is assumed that b_i = 0 or q_i^{[k]} = 0, k = 1, 2, 3, implying that Lemma 1 is valid so that all conditions D collapse to conditions of type C. Otherwise, in contrast to assuming q^{[4]} = 0, it will be assumed that q^{[4]} lies in the nullspace of β̄_4. A second non-trivial nullvector of β̄_4 is the Jacobi polynomial J_4(C)e of degree 4 on [0, 1] that is orthogonal to all polynomials of lower degree. In particular, observe using order conditions of type A with (10) that bC^k J_4(C)e = 0, k = 0, 1, 2, 3. As well, order conditions of type B imply that bA^j J_4(C)e = 0, j = 2, 3, by direct computation with (12), (13) and (16), so that J_4(C)e lies in the nullspace of β_4. For J_4(C)e to lie in the nullspace of β̄_4, we shall show later that the rowspace of β̄_4 usually lies in the span of the rows of β_4.
For several examples of these new methods found using Test21, some properties were determined computationally. These properties guide the statement and proofs of Theorem 3.
- The six rows of β_4 are linearly independent, and they span the rowspace of β̄_4; hence, the remaining rows of β̄_4 are linearly dependent on these.
5. Choose L_{i,j} by (13), with L_{i,j} = 0, j = 2, ..., 5 (14), and use back-substitution to compute the weights b_i and the coefficients of stages 12, 11 and 10.
Then, for most choices of the arbitrary nodes, the method has order 8.
Proof The first two nodal constraints of item 1. are needed to allow the several stage-order conditions of items 2. and 3. to be satisfied. The constraint on c_9 and item 4. are used to satisfy bC²AC⁴e = 1/40, as shown in [10]. Conditions A and B follow from b_i = 0, i = 2, ..., 5, constraints on q^{[k]} in item 3., and values of L_{i,j} = 0. Also, the values b_i = 0, i = 2, ..., 5, and the values of q_i^{[k]} assumed to be zero force conditions D to collapse to conditions C. As well, the same values with item 4. imply that columns 2 to 5 of β̄_4 are all zero.
Next, the rows of β_4 are usually linearly independent. For distinct nodes, linear independence of {b, bC, bC², bC³} follows from a Vandermonde matrix determined by those values of b_i that are non-zero. Assume bAA is a linear combination of the previous four rows. Hence, for bAA = K_1 b + K_2 bC + K_3 bC² + K_4 bC³, post-multiplication by C^k e, k = 0, ..., 4, with conditions A and B, leads to a system of five equations. The unique solution of this system leads to bAA = b(I − C)²/2. However, (bAA)_{11} = 0, while b_{11}(1 − c_{11})² ≠ 0 usually, leading to a contradiction. Hence, bAA is usually linearly independent of {b, bC, bC², bC³}. A similar argument with (bAAA)_{10} = (bAAA)_{11} = 0 shows that bAAA is usually linearly independent of the four rows and bAA. For this, post-multiplication by C^k e, k = 0, ..., 4, of a linear combination for bAAA and stage 11 leads to an analogous system. This requires the linear polynomial in braces to be orthogonal to π(c)(c − 1), and for each viable choice of all nodes except c_10, this occurs for only a finite number of values of c_10. Hence, this establishes that β_4 usually has a linearly independent set of six rows.
To establish that conditions C hold, formulas among A, C, b are used, and these are considered separately. Because the values in columns 2 to 5 of β̄_4 are all zero, and otherwise stage-orders are at least three, only order conditions with leading parts in β_4 need to be considered. Formulas (12) and (13) for L_{i,j}, i = 12, 13, imply that bA = b(I − C). (16) Now, substitution of bA or bC from (16) shows that each of bA, bCA, bAC, bCAA, bACC lies in the rowspace of β_4. Post-multiplication of (16) by AC shows that the sum bAAC + bCAC = bAC lies in this rowspace; hence both or neither of bAAC and bCAC lie in this rowspace. Similarly, both or neither of bACA and bCCA lie in this rowspace. Since the nullspace of β_4 contains six linearly independent vectors {e_i, i = 2, ..., 5, J_4(C)e, q^{[4]}}, and β_4 has twelve nonzero columns, it has rank at most six, and usually contains six linearly independent rows. Hence, we might expect that the four vectors bAAC, bCAC, bACA and bCCA will also lie in this rowspace. We now show that each of these is usually a linear combination of the rows of β_4. Now, the orthogonality of β_4 to ᾱ_4, and otherwise stage-order conditions to order 3, imply that all of conditions C hold, and hence the method has order eight.

Algorithm for new methods
In contrast to the algorithm for classical (12,8) methods above, we now give an algorithm for computing coefficients of nullspace (12,8) methods. Initially, we need to compute those multiples of a_{7,6}, an arbitrary value, that occur in each non-zero entry in column six of matrix A. These values are designated as C1_i, i = 7, ..., 12, with C1_7 = 1, and form the non-zero entries of a right nullvector of the rows of β_4. More detail of this computation appears later in Theorem 4. To begin the algorithm, impose the same nodal constraints given for Theorem 3; these are the same as for a (12,8) method of a pair found in [10]:
- Weights: b_i = L_{13,i}, i = 1, ..., 12. (SO=8)
- Stages 12, 11, 10: use back-substitution on L_{14−k,i}, i = 14−k, ..., 1, k = 2, ..., 4, to get a_{14−k,i}, i = 13−k, ..., 1, k = 2, ..., 4. (SO=3)
It follows for most if not all values of {bAA, bA, b} that all values in braces must be zero, and hence q^{[4]}, as a multiple of C1, is a nullvector of β_4. As a postnote, we remark that in computing examples of these new nullspace methods, each vector of components of a_{7,6} in columns 1, 4, 5, 6 of rows 7 to 12 is a multiple of the corresponding subvector of q^{[4]} formed by rows 7 to 12. In summary, the stage orders of stages 7 to 12 are reduced to SO=3, but stage 6 retains SO=4. In place of stage order 4, C1 has been designed as a nonzero nullvector of β_4, and then forced to be a multiple of q^{[4]}. As stated previously after the first algorithm in Section 5, even more is needed.

Structure of new methods
Theorem 3 does not quite define an algorithm for the new methods of order eight. When Test21 was applied using consistent sets of nodes, some problems occurred.
2. Even for this choice of c_6, only a few sets of nodes allowed the Test21 algorithm in MAPLE to compute coefficients of a new method.
3. More detail on making q^{[4]} orthogonal to β̄_4 is needed.
These problems motivated an attempt to find an algorithm to directly compute exact coefficients for each method of this new family.
Theorem 4 Assume {b, A, c} form the coefficients of a traditional twelve-stage method of order eight. Assume that the last six columns of β̄_4 form a 20 × 6 matrix with rank 5. Define two column and two row vectors by
- C1_i = 0, i = 1, ..., 6, C1_7 = 1, and otherwise C1 is a nonzero solution of β̄_4 · C1 = 0.
Proof The first part of the proof establishes that C1 is a right nullvector of β̄_4, R1 is a left nullvector of {e, Ce, C²e, ACe}, C2 is a right nullvector of β̄_3, and R2 is a left nullvector of {e, Ce, C²e, ACe, C³e, ACCe, CACe, AACe}.
We shall refer to a method with the format (17) as a nullspace method. The type A (quadrature) order conditions for the nullspace method are identical to those for the traditional method, and hence are valid. We now show that each standard order condition of type B, C, or D for a method with the perturbed matrix Ā of (17) has the same value as the corresponding order condition for the method with matrix A. To do this, we consider the left side of a standard order condition (5) with each A replaced by (17). We then expand the resulting expression and consider each term separately. For clarity, we first consider an order condition with only one occurrence of A, and for 0 ≤ k + l ≤ 6, this gives

   bC^k Ā C^l e = bC^k A C^l e + a_{7,6}(bC^k C1)(R1 C^l e) + a_{8,7}(bC^k C2)(R2 C^l e) = bC^k A C^l e,
because the latter two terms evaluate to zero. Observe that either k < 4 or l < 3: if k < 4, then bC^k C1 = 0, or else l < 3 and R1 C^l e = 0, so the second term is zero. As well, either k < 3 or l < 4, and then bC^k C2 = 0 or R2 C^l e = 0 respectively, so the third term is zero. For a more general order condition of type B, C, or D in which A must occur more than once, the expansion gives a single term in which only A occurs, or else each pair C1, R1 and C2, R2 occurs at least once. In each term of the expansion, the first occurrence of C1 or C2, with its orthogonality to β̄_4 or β̄_3 respectively, will give zero, or else a later occurrence of R1 or R2, with its orthogonality to terms in C^k, AC^{k−1}, etc., will give zero. This leaves exactly an equality of the left side of a standard order condition (5) in Ā with the left side of the same order condition (5) with A replacing each occurrence of Ā. This establishes that each order condition (5) for the method (b, Ā, c) is satisfied because it is equivalent to the corresponding order condition (5) for the method (b, A, c). Hence, the coefficients (b, Ā, c) yield a new method for each choice of a_{7,6} and a_{8,7}.
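The cancellation mechanism in this proof is elementary: a rank-one update u·v^T to A contributes (bC^k u)(v^T C^l e) to bC^k A C^l e, which vanishes whenever u annihilates the low-degree b-moments or v annihilates the low-degree node moments. A toy sketch with s = 4 and synthetic vectors (not the C1, R1 of the theorem): u is chosen orthogonal to b and bC, v orthogonal to e and c, so that perturbed conditions with k ≤ 1 or l ≤ 1 are unchanged while (k, l) = (2, 2) is not.

```python
from fractions import Fraction as F

b = [F(1,8), F(3,8), F(3,8), F(1,8)]
c = [F(0), F(1,3), F(2,3), F(1)]
A = [[0,0,0,0],
     [F(1,3),0,0,0],
     [0,F(2,3),0,0],
     [F(1,4),F(1,2),F(1,4),0]]

# Synthetic vectors: u is orthogonal to b and b*C; v to e and c.
u = [F(3), F(-2), F(1), F(0)]
v = [F(1), F(-2), F(1), F(0)]

def cond(M, k, l):
    """Evaluate b C^k M C^l e for a 4x4 matrix M."""
    w = [ci**l for ci in c]                                   # C^l e
    Mw = [sum(M[i][j]*w[j] for j in range(4)) for i in range(4)]
    return sum(bi*ci**k*x for bi, ci, x in zip(b, c, Mw))

# Perturbed matrix Abar = A + u v^T (a rank-one update).
Abar = [[A[i][j] + u[i]*v[j] for j in range(4)] for i in range(4)]

# Conditions with k <= 1 or l <= 1 are unchanged by the update...
unchanged = all(cond(Abar, k, l) == cond(A, k, l)
                for k in range(3) for l in range(3) if k <= 1 or l <= 1)
# ...while (k, l) = (2, 2) picks up the term (bC^2 u)(v^T C^2 e).
delta = cond(Abar, 2, 2) - cond(A, 2, 2)
```

In the theorem, the orthogonality of C1, C2 to β̄_4, β̄_3 and of R1, R2 to the listed moment vectors extends this cancellation to every order condition of the method, not just these low-degree samples.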
⊓⊔

Theorem 5 Suppose a method is chosen by Theorem 4 with c_6 = 1/2 and almost any constants a_{7,6}, a_{8,7}. Then columns 7 to 12 of β̄_4 form a 20 × 6 matrix of rank 5, and the method is a 12-stage method of order 8.
Proof It has already been established that the six rows of β_4 usually span β̄_4, and that, as well, both bAAC and bACA must be in this span. To show that the six specified columns of β̄_4 have rank 5, it is sufficient to show that certain columns are linearly dependent. This requirement occurs if each 3 × 3 submatrix of RB has determinant equal to zero. For this it is sufficient to show the determinant is zero for the three sets of rows {1, 2, 3}, {1, 3, 4} and {1, 3, 5}, for in this case all rows of RB will be linear combinations of rows 1 and 3, and RB will have rank 2. All values needed to compute these rows generically can be found using formulas (12) with (13).
For convenience, introduce the abbreviations used below. By the distinctness of nodes 7, ..., 12, c_10 ≠ 1 ≠ c_11, so for all three determinants to be zero, it follows that c_6 = 1/2, or else c_6 = 1/2 ± √21/14. With either of the latter two choices, some difficulties with applying the algorithm of Theorem 3 arise; otherwise, these latter two choices have been shown to lead to (11,8) methods found by Cooper and Verner [4], and Curtis [5]. Hence, for the methods proposed here, it is necessary that c_6 = 1/2. With this choice, the algorithms of Theorem 3 and Theorem 4 can be applied to find examples in this new family of (12,8) methods.
⊓⊔ Comment: Even so, an example for one choice of nodes with c_6 = 171/410 (and arbitrary a_{7,6}) was found that gave the 6 × 6 submatrix from columns 7 to 12 of β_4 rank 5, and the right six columns of all of β̄_4 rank 6, and which satisfied all the other nullspace constraints exactly, but led to a 12-stage method only of order 7. (With the single required value of a_{7,6}, the same set of nodes led to a classical 12-stage method of order 8 for any value of a_{8,7}.)

8 An embedded method of order seven

For each traditional (12,8) method with c_6 = 1/2, and any value of a_{8,7}, the previous sections yield a new family of methods in the parameter a_{7,6}. While we have exchanged the freedom to choose an arbitrary value for c_6 to make a_{7,6} a parameter, we have derived a new family of explicit Runge–Kutta methods.
For each (12,8) method, it remains to show that a new node, c_13 = 1, and appropriate coefficients for the corresponding thirteenth stage can be used with the first ten stages to obtain a different (embedded) method of order 7. The weights b̂_i, i = 1, ..., 10, 13, provide an order 7 quadrature rule for the nodes c_i, i = 1, ..., 10, 13, as follows. Choose b̂_i = 0, i = 2, ..., 5, and the remaining seven weights may be determined uniquely by integrating the interpolation polynomial on the seven restricted nodes {c_i, i = 1, 6, ..., 10, 13}. (18) Coefficients for the new stage will be determined by assuming a_{13,11} = a_{13,12} = 0, and then using back substitution on (b̂A)_j = 0, j = 2, ..., 5. (19)

Lemma 3 If, for any classical or nullspace twelve-stage method of order eight satisfying β_i * α_j = 0, i ≤ j, i + j ≤ 8, an embedded method is selected using (18) and (19), the embedded method has order seven.
Proof This proof uses the orthogonality properties of the main method to establish that, with the weights of the embedded method and the new stage, the embedded method has order seven. Equation (18) implies that and this equation with (19) implies that Next, we show that bA lies in the span of β_3. By direct computation with (12), (13), (14) and (21) it follows that The expression in braces is zero if i = 11 or 12, and vacuous if i = 13. As well, the terms for i = 2, ..., 5 are zero because of the restrictions imposed on b, bA, bAA. The remaining matrix in {c_i, i = 1, 6, ..., 10} is Vandermonde with distinct nodes, so it has full rank 6. Accordingly, each remaining term in braces is zero. Hence, since the right side lies in the rowspace of β_4, so also does bAA, and so bAA is a left nullvector of α_4.
Post-multiplication of (24) by each of AC, CA, AA and CC gives terms of degree 5, and these lead to the correct values on the right-hand sides. These conditions are sufficient to show that b and the new stage 13 can be used with the leading ten stages of the method of order 8 to obtain an (embedded) method of order seven by back substitution.
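The weight-determination step above (setting b_i = 0 for i = 2, ..., 5 and integrating the interpolation polynomial on the seven restricted nodes) amounts to solving a small Vandermonde moment system. A minimal numerical sketch of that linear solve follows; the node values used are illustrative placeholders, not the nodes of any method derived in this paper.

```python
import numpy as np

# Interpolatory quadrature weights on distinct nodes in [0, 1]:
# the weights solve the moment (Vandermonde) system
#     sum_i b_i * c_i**(k-1) = 1/k,   k = 1, ..., n,
# which has a unique solution because distinct nodes make the
# Vandermonde matrix nonsingular.
def interpolatory_weights(nodes):
    nodes = np.asarray(nodes, dtype=float)
    k = np.arange(1, nodes.size + 1)
    V = nodes[np.newaxis, :] ** (k[:, np.newaxis] - 1)  # moment matrix
    rhs = 1.0 / k                    # integral of x**(k-1) over [0, 1]
    return np.linalg.solve(V, rhs)

# Seven distinct placeholder nodes with c_1 = 0 and c_13 = 1:
c = [0.0, 0.4, 0.5, 0.6, 0.8, 0.9, 1.0]
b = interpolatory_weights(c)
# By construction the rule integrates 1, x, ..., x**6 exactly on [0, 1]:
for k in range(1, 8):
    assert abs(sum(bi * ci**(k - 1) for bi, ci in zip(b, c)) - 1.0 / k) < 1e-9
```

The same solve applies to any set of seven distinct nodes; only the order 7 quadrature property, not any particular node choice, is used in Lemma 3.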

9 Efficiency of the new methods
The family of new pairs may be considered as one that intersects the family of traditional pairs [10] when c_6 = 1/2. Sets of coefficients for (nearly) optimal traditional pairs have been computed and placed on the author's website, and one possibility is that the best of the new pairs might be competitive. It is generally accepted that, of any pair of explicit Runge-Kutta methods of successive orders of accuracy, the method of higher order is the one accepted for propagation; the difference of the two approximations is used as an estimate of the local error. A good measure of the effectiveness of this process in giving accurate solution values for most initial value problems is the 2-norm of the vector of coefficients of the order p+1 error terms of the method of order p: if the step-size h is small relative to the interval of solution, this norm indicates the magnitude of the local error for an IVP. Table 4 reports several well-known efficient (13, 7-8) pairs together with their three leading nodes, the 2-norm of the leading error term of the higher-order method, the largest coefficient (D), and the interval of stability of the higher-order method of each pair. Pairs by this author are indicated by the year derived, with the characteristic values of the best nullspace method found placed in the last line. Other methods reported appear in Hairer, Nørsett and Wanner [7] (pages 181-185, a (12, 8-6) pair), in Sharp and Smart [9], and as DVERK78 in the MAPLE computing system.

Table 5: A nullspace (13, 7-8) pair for which the main stage-order is three.
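The 2-norm measure above admits a standard step-size interpretation: since the leading local error behaves like A·h^(p+1), two methods of the same order p attain equal local errors with step sizes in the ratio (A_2/A_1)^(1/(p+1)). A hedged sketch of this comparison, using placeholder norm values rather than entries of Table 4:

```python
# Comparing two order-p methods by the 2-norms A1, A2 of their leading
# truncation-error coefficients. Equal leading local errors
#     A1 * h1**(p+1) = A2 * h2**(p+1)
# imply the step-size ratio below; r > 1 means method 1 can take
# larger steps for the same local error.
def step_ratio(A1, A2, p):
    """Relative step size h1/h2 giving equal leading local error."""
    return (A2 / A1) ** (1.0 / (p + 1))

p = 8                       # order of the propagated (higher-order) method
A1, A2 = 2.0e-5, 4.0e-5     # illustrative placeholder 2-norms
r = step_ratio(A1, A2, p)
```

Because of the 1/(p+1) exponent, at order 8 even a factor-of-two difference in error norms changes the admissible step size by only about 8 percent, which is why the tabulated norms of competing pairs are often close in practice.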

Appendix - An optimal pair
In contrast to the previous pair, the nodes may be changed to optimize the pair, in the sense that the 2-norm of the local truncation error coefficients of the higher-order method is minimized. The final line in Table 4 states properties of this pair. Slight improvements on this pair are possible by reducing c_2 towards zero, but such improvements are marginal. Table 6 records coefficients of {b, b, A, c} when a_{7,6} = 0 and a_{8,7} = 0 for the optimal pair. A more general pair is obtained when matrix A is computed with almost any values of a_{7,6} and a_{8,7} using the four following vectors in (17):

7 to 12 of the 8 × 6 matrix of β_4, together with those columns of bAAC and bACA, has rank 5. Consideration of bAAC and bACA is needed to ensure they exist for c_6 = 1/2 and most choices of the arbitrary nodes. Now, row reduction of this 8 × 6 matrix cascades easily to show that this is equivalent to showing that the 5 × 3 matrix RB has rank 2, where the five rows of RB are

- b_i (c_i − c_10)(c_i − c_11)(c_i − 1),
- (bAAA)_i,
- b_10 (c_10 − c_11)(c_10 − 1) (bAA)_i − (bAA)_10 b_i (c_i − c_11)(c_i − 1),
- (bAA)_i (c_i − c_10),
- (bA(C − c_11 I)A)_i,

for i = 7, 8, 9.
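Once candidate coefficients are in hand, a rank condition such as "RB has rank 2" can be verified numerically. A minimal sketch, assuming synthetic placeholder data (a 5 × 3 matrix built to have rank exactly 2, standing in for RB) rather than the coefficients of any method in this paper:

```python
import numpy as np

# Build a 5 x 3 matrix of rank exactly 2 as a sum of two outer
# products of linearly independent vectors; this is placeholder data
# standing in for RB, not the RB of any derived method.
u = np.array([[1.0], [2.0], [0.5], [-1.0], [3.0]])   # 5 x 1
v = np.array([[2.0], [0.0], [1.5], [1.0], [-2.0]])   # 5 x 1
r1 = np.array([[1.0, 0.0, 2.0]])                     # 1 x 3
r2 = np.array([[0.0, 1.0, -1.0]])                    # 1 x 3
RB = u @ r1 + v @ r2                                 # rank is exactly 2

# matrix_rank uses a singular-value threshold, so it is robust to the
# rounding incurred when RB is assembled from computed coefficients.
assert np.linalg.matrix_rank(RB) == 2
```

In practice the singular values of the assembled matrix should be inspected directly, since coefficients satisfying the nullspace constraints only to rounding error can blur a rank decision made at machine precision.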

Table 2: The numbers of different types of order conditions.