Optimistic Optimisation of Composite Objective with Exponentiated Update

This paper proposes a new family of algorithms for the online optimisation of composite objectives. The algorithms can be interpreted as combinations of the exponentiated gradient and $p$-norm algorithms. Together with the algorithmic ideas of adaptivity and optimism, the proposed algorithms achieve sequence-dependent regret upper bounds, matching the best-known bounds for sparse target decision variables. Furthermore, the algorithms have efficient implementations for popular composite objectives and constraints, and can be converted into stochastic optimisation algorithms with the optimal accelerated rate for smooth objectives.


Introduction
Many machine learning problems involve minimising high-dimensional composite objectives (Dhurandhar et al., 2018; Lu, Lin, & Yan, 2014; Ribeiro, Singh, & Guestrin, 2016; Xie, Bijral, & Ferres, 2018). For example, in the task of explaining the predictions of an image classifier (Dhurandhar et al., 2018; Ribeiro et al., 2016), we need to find a sufficiently small set of features explaining the prediction by solving the following constrained optimisation problem
$$\min_{x \in \mathbb{R}^d} \; l(x) + \lambda_1 \|x\|_1 + \frac{\lambda_2}{2}\|x\|_2^2 \quad \text{s.t. } |x_i| \le c_i \text{ for all } i = 1, \dots, d,$$
where $l$ is a function relating to the classifier, $\lambda_1$ controls the sparsity of the feature set, $\lambda_2$ controls the complexity of the feature set, and $c_1, \dots, c_d$ are the ranges of the features. For $l$ with a complicated structure and large $d$, it is practical to solve the problem by optimising the first-order approximation of the objective function (Lan, 2020). However, first-order methods cannot attain optimal performance due to the non-smooth component $\lambda_1\|\cdot\|_1$. Furthermore, the purpose of introducing the $\ell_1$ regularisation is to ensure the sparsity of the decision variable, and applying first-order algorithms directly to the subgradient of $\lambda_1\|\cdot\|_1$ does not lead to sparse updates (J. C. Duchi, Shalev-Shwartz, Singer, & Tewari, 2010). We refer to an objective function consisting of a loss with a complicated structure and a simple (possibly non-smooth) convex regularisation term as a composite objective.

This paper focuses on the more general setting of online convex optimisation (OCO), which can be considered as an iterative game between a player and an adversary. In each round $t$ of the game, the player makes a decision $x_t \in K$. Next, the adversary selects and reveals a convex loss $l_t$ to the player, who then suffers the composite loss $f_t(x_t) = l_t(x_t) + r_t(x_t)$, where $l_t: K \to \mathbb{R}$ is a convex function revealed at each iteration and $r_t: X \to \mathbb{R}_{\ge 0}$ is a known closed convex function.
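As a concrete illustration of the composite structure above, the following sketch evaluates $f(x) = l(x) + \lambda_1\|x\|_1 + \frac{\lambda_2}{2}\|x\|_2^2$; the quadratic loss and all parameter values are hypothetical stand-ins for the classifier-dependent term $l$:

```python
import numpy as np

def composite_objective(x, loss, lam1, lam2):
    # f(x) = l(x) + lam1 * ||x||_1 + (lam2 / 2) * ||x||_2^2
    return loss(x) + lam1 * np.abs(x).sum() + 0.5 * lam2 * x @ x

# Hypothetical smooth loss standing in for the classifier term l.
b = np.array([1.0, 0.0, -2.0])
loss = lambda x: 0.5 * np.sum((x - b) ** 2)

value = composite_objective(np.zeros(3), loss, lam1=0.1, lam2=0.01)
```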
The goal is to develop algorithms minimising the regret of not choosing the best fixed decision $x \in K$ in hindsight, defined as
$$\operatorname{Regret}_T = \sum_{t=1}^{T} f_t(x_t) - \min_{x \in K}\sum_{t=1}^{T} f_t(x).$$
An online optimisation algorithm can be converted into a stochastic optimisation algorithm using the online-to-batch conversion technique (Cesa-Bianchi, Conconi, & Gentile, 2004), which is our primary motivation. In addition, online optimisation has many direct applications, such as recommender systems (Song, Tekin, & Van Der Schaar, 2014) and time series prediction (Anava, Hazan, Mannor, & Shamir, 2013).
Given a sequence of subgradients $\{g_t\}$ of $\{l_t\}$, we are interested in so-called adaptive algorithms ensuring regret bounds of the form $O(\sqrt{\sum_{t=1}^{T}\|g_t\|_*^2})$. Adaptive algorithms are worst-case optimal in the online setting (McMahan & Streeter, 2010) and can be converted into stochastic optimisation algorithms with optimal convergence rates (Cutkosky, 2019; Joulani, Raj, György, & Szepesvári, 2020; Kavis, Levy, Bach, & Cevher, 2019; Levy, Yurtsever, & Cevher, 2018). The adaptive subgradient method (AdaGrad) (J. Duchi, Hazan, & Singer, 2011) and its variants (Alacaoglu, Malitsky, Mertikopoulos, & Cevher, 2020; J. Duchi et al., 2011; Orabona, Crammer, & Cesa-Bianchi, 2015; Orabona & Pál, 2018) have become the most popular adaptive algorithms in recent years. They are often applied to training deep learning models and outperform standard optimisation algorithms when the gradient vectors are sparse. However, this property cannot be expected in every problem. If the decision variables lie in an $\ell_1$ ball and the gradient vectors are dense, AdaGrad-style algorithms do not have an optimal theoretical guarantee due to the sub-linear regret dependence on the dimensionality.
The exponentiated gradient (EG) methods (Arora, Hazan, & Kale, 2012; Kivinen & Warmuth, 1997), which are designed for estimating weights in the positive orthant, enjoy a regret bound growing logarithmically with the dimensionality. The EG± algorithm generalises this idea to negative weights (Kivinen & Warmuth, 1997; Warmuth, 2007). Given a $d$-dimensional problem with the maximum norm of the gradients bounded by $G$, the regret of EG± is upper bounded by $O(G\sqrt{T\ln d})$. As the performance of the EG± algorithm depends strongly on the choice of hyperparameters, the p-norm algorithm (Gentile, 2003), which is less sensitive to the tuning of hyperparameters, was introduced to approach the logarithmic behaviour of EG±. Kakade, Shalev-Shwartz, and Tewari (2012) further extend the p-norm algorithm to learning with matrices. An adaptive version of the p-norm algorithm is analysed in Orabona et al. (2015), which has a regret upper bound proportional to $\|x\|_p^2\sqrt{\sum_{t=1}^{T}\|g_t\|_{p,*}^2}$ for a given sequence of gradients $\{g_t\}$, where $\|\cdot\|_{p,*}$ denotes the dual norm. By choosing $p = 2\ln d$, a regret upper bound logarithmic in the dimensionality can be achieved. However, tuning hyperparameters is still required to attain the optimal regret $O(\|x\|_1\sqrt{\ln d\sum_{t=1}^{T}\|g_t\|_\infty^2})$. Recently, Ghai, Hazan, and Singer (2020) have introduced a hyperbolic regulariser for the online mirror descent update (HU), which can be viewed as an interpolation between gradient descent and EG. It has a logarithmic behaviour as EG and a stepsize that can be flexibly scheduled as in gradient descent. However, many optimisation problems with sparse targets have an $\ell_1$ or nuclear regulariser in the objective function, or the optimisation algorithm has to pick a decision variable from a compact decision set. Due to the hyperbolic regulariser, it is difficult to derive a closed-form solution for either case. Ghai et al. (2020) have proposed a workaround by tuning a temperature-like hyperparameter to normalise the decision variable at each iteration, which is equivalent to the EG± algorithm and leads to a performance dependence on the tuning.

This paper proposes a family of algorithms for the online optimisation of composite objectives. The algorithms employ an entropy-like regulariser combined with the algorithmic ideas of adaptivity and optimism. Equipped with this regulariser, the online mirror descent (OMD) and follow-the-regularised-leader (FTRL) algorithms update the absolute values of the scalar components of the decision variable in the same way as EG in the positive orthant, while the directions of the components are set in the same way as in the p-norm algorithm. To derive the regret upper bound, we first show that the regulariser is strongly convex with respect to the $\ell_1$ norm over the $\ell_1$ ball. Then we analyse the algorithms in the comprehensive framework for optimistic algorithms with adaptive regularisers (Joulani, György, & Szepesvári, 2017). Given the radius $D$ of the decision set and sequences of gradients $\{g_t\}$ and hints $\{h_t\}$, the proposed algorithms achieve a regret upper bound of the form $O(D\sqrt{\ln d\sum_{t=1}^{T}\|g_t - h_t\|_\infty^2})$. With the techniques introduced in Ghai et al. (2020), a spectral analogue of the entropy-like regulariser can be found and proved to be strongly convex with respect to the nuclear norm over the nuclear ball, from which the best-known regret upper bound depending on $\ln(\min\{m,n\})$ for problems in $\mathbb{R}^{m,n}$ follows.
Furthermore, the algorithms have closed-form solutions for $\ell_1$ and nuclear regularised objective functions. For $\ell_2$ and Frobenius regularised objectives, the update rules involve values of the principal branch of the Lambert function, which can be well approximated. We propose a sorting-based procedure projecting the solution onto the decision set for $\ell_1$ or nuclear ball constrained problems. Finally, the proposed online algorithms can be converted into algorithms for stochastic optimisation with the technique introduced in Joulani et al. (2020). We show that the converted algorithms guarantee an optimal accelerated convergence rate for smooth objective functions. The convergence rate depends logarithmically on the dimensionality of the problem, which suggests its advantage over the accelerated AdaGrad-style algorithms (Cutkosky, 2019; Joulani et al., 2020; Levy et al., 2018).
The rest of the paper is organised as follows. Section 2 reviews the existing work. Section 3 introduces the notation and preliminary concepts. Next, we present and analyse our algorithms in Section 4. In Section 5, we derive efficient implementations for some popular choices of composite objectives, constraints and stochastic optimisation. Section 6 demonstrates the empirical evaluations using both synthetic and real-world data. Finally, we conclude our work in Section 7.

Related Work
Our primary motivation is to solve optimisation problems with an elastic net regulariser in their objective function, which arise frequently in attacking (Cancela, Bolón-Canedo, & Alonso-Betanzos, 2021; Carlini & Wagner, 2017; P.-Y. Chen, Sharma, Zhang, Yi, & Hsieh, 2018) and explaining (Dhurandhar et al., 2018; Ribeiro et al., 2016) deep neural networks. The proximal gradient method (PGD) (Nesterov, 2003) and its accelerated variants (Beck & Teboulle, 2009) are usually applied to solving such problems. However, these algorithms are not practical in this setting, since they require prior knowledge about the smoothness of the objective function to ensure their convergence.
The AdaGrad-style algorithms (Alacaoglu et al., 2020; J. Duchi et al., 2011; Orabona et al., 2015; Orabona & Pál, 2018) have become popular in the machine learning community in recent years. Given the gradient vectors $g_1, \dots, g_{t-1}$ received before iteration $t$, the core idea of these algorithms is to set the stepsize proportional to $1/\sqrt{\sum_{s=1}^{t-1}\|g_s\|_*^2}$ to ensure a regret upper bounded by $O(\sqrt{\sum_{t=1}^{T}\|g_t\|_*^2})$ after $T$ iterations. Online learning algorithms with this adaptive regret can be directly applied to stochastic optimisation problems (Alacaoglu et al., 2020; Li & Orabona, 2019) or can be converted into stochastic algorithms (Cesa-Bianchi & Gentile, 2008) with a convergence rate of $O(\frac{1}{\sqrt{T}})$. This rate can be further improved to $O(\frac{1}{T^2})$ for unconstrained problems with smooth loss functions by applying acceleration techniques (Cutkosky, 2019; Kavis et al., 2019; Levy et al., 2018). These acceleration techniques do not require prior knowledge about the smoothness of the loss function and guarantee a convergence rate of $O(\frac{1}{\sqrt{T}})$ for non-smooth functions. Joulani et al. (2020) has proposed a simple approach to accelerate optimistic online optimisation algorithms with adaptive regret bounds.
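The stepsize recipe just described can be sketched in a few lines; this is a generic per-coordinate AdaGrad step for illustration, not any of the paper's algorithms, and the stepsize constant is a hypothetical choice:

```python
import numpy as np

def adagrad_step(x, g, accum, eta=1.0, eps=1e-8):
    # Accumulate squared gradients and scale each coordinate by
    # eta / sqrt(sum_{s<=t} g_{s,i}^2), the AdaGrad-style stepsize.
    accum = accum + g ** 2
    x = x - eta * g / (np.sqrt(accum) + eps)
    return x, accum

x, accum = np.zeros(3), np.zeros(3)
x, accum = adagrad_step(x, np.array([1.0, -2.0, 0.0]), accum)
```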
Given a $d$-dimensional problem, the algorithms mentioned above have a regret upper bound depending (sub-)linearly on $d$. We are interested in a logarithmic regret dependence on the dimensionality, which can be attained by the EG family of algorithms (Arora et al., 2012; Kivinen & Warmuth, 1997; Warmuth, 2007) and their adaptive optimistic extension (Steinhardt & Liang, 2014). However, these algorithms work only for decision sets in the form of cross-polytopes and require prior knowledge about the radius of the decision set for general convex optimisation problems. The p-norm algorithm (Gentile, 2003; Kakade et al., 2012) does not have this limitation; however, it still requires prior knowledge about the problem to attain optimal performance (Orabona et al., 2015). The HU algorithm (Ghai et al., 2020), which interpolates between gradient descent and EG, can theoretically be applied to loss functions with elastic net regularisers and decision sets other than cross-polytopes. However, it is not practical due to the complex projection step.
Following the idea of HU, we propose more practical algorithms interpolating between EG and the p-norm algorithm. The core of our algorithms is a symmetric logarithmic function. Orabona (2013) first introduced the idea of composing the one-dimensional symmetric logarithmic function with a norm to generalise EG to infinite-dimensional spaces. This composition has become popular for parameter-free optimisation (Cutkosky & Boahen, 2017a, 2017b; Kempka, Kotlowski, & Warmuth, 2019), since one can easily construct an adaptive regulariser from it (Cutkosky & Boahen, 2017a). In this paper, instead of using the composition, we apply the symmetric logarithmic function directly to each entry of a vector to construct a symmetric entropy-like function that is strongly convex with respect to the $\ell_1$ norm. We analyse OMD and FTRL with the entropy-like function in the framework developed in Joulani et al. (2017). The analysis of the spectral analogue of the entropy-like function follows the idea proposed in Ghai et al. (2020).

Preliminary
The focus of this paper is OCO with the decision variable taken from a compact convex subset $K \subseteq X$ of a finite dimensional vector space equipped with a norm $\|\cdot\|$. Given a sequence of vectors $\{v_t\}$, we use the compressed-sum notation $v_{1:t} = \sum_{s=1}^{t} v_s$ for simplicity. We denote by $X^*$ the dual space with the dual norm $\|\cdot\|_*$, and by $\langle\cdot,\cdot\rangle: X^* \times X \to \mathbb{R}$ the bi-linear map combining vectors in $X^*$ and $X$. For $X = \mathbb{R}^d$, we denote by $\|\cdot\|_1$ the $\ell_1$ norm, the dual norm of which is the maximum norm denoted by $\|\cdot\|_\infty$. It is well known that the $\ell_2$ norm, denoted by $\|\cdot\|_2$, is self-dual. In case $X$ is a space of matrices, for simplicity, we also use $\|\cdot\|_1$, $\|\cdot\|_2$ and $\|\cdot\|_\infty$ for the nuclear, Frobenius and spectral norms, respectively.
Let $\sigma: \mathbb{R}^{m,n} \to \mathbb{R}^{\min\{m,n\}}$ be the function mapping a matrix to its singular values. Clearly, the singular value decomposition (SVD) of a matrix $X$ can be expressed as $X = U\operatorname{diag}(\sigma(X))V^\top$. Similarly, we write the eigendecomposition of a symmetric matrix $X$ as $X = U\operatorname{diag}(\lambda(X))U^\top$, where we denote by $\lambda: \mathbb{S}^d \to \mathbb{R}^d$ the function mapping a symmetric matrix to its spectrum. Given a convex set $K \subseteq X$ and a convex function $f: K \to \mathbb{R}$ defined on $K$, we denote by $\partial f(y) = \{g \in X^* \mid \forall x \in K.\, f(x) - f(y) \ge \langle g, x - y\rangle\}$ the subdifferential of $f$ at $y$, and refer to $f'(y)$ as any element of $\partial f(y)$. A function $f$ is $\eta$-strongly convex with respect to $\|\cdot\|$ over $K$ if
$$f(x) \ge f(y) + \langle f'(y), x - y\rangle + \frac{\eta}{2}\|x - y\|^2$$
holds for all $x, y \in K$ and $f'(y) \in \partial f(y)$.

Algorithms and Analysis
In this section, we present and analyse our algorithms, beginning with a short review of EG and the p-norm algorithm for the case $f_t = l_t$. The EG algorithm can be considered as an instance of OMD; its update rule is given by
$$x_{t+1,i} = \frac{x_{t,i}\exp(-\eta g_{t,i})}{\sum_{j=1}^{d} x_{t,j}\exp(-\eta g_{t,j})},$$
where $g_t \in \partial f_t(x_t)$ is the subgradient and $\eta > 0$ is the stepsize. Although the algorithm has the expected logarithmic dependence on the dimensionality, its update rule is applicable only to decision variables on the standard simplex. For problems with decision variables taken from an $\ell_1$ ball $\{x \mid \|x\|_1 \le D\}$, one can apply the EG± trick, i.e. use the vector $[\frac{D}{2}g_t, -\frac{D}{2}g_t]$ to update the concatenation $[x_{t+1,+}, x_{t+1,-}]$ at iteration $t$ and choose the decision variable $x_{t+1,+} - x_{t+1,-}$. However, if the decision set is implicitly given by a regularisation term, the parameter $D$ has to be tuned. Since applying an overestimated $D$ increases the regret, while an underestimated $D$ decreases the freedom of the model, the algorithm is sensitive to tuning. For composite objectives, EG is not practical due to its multiplicative update rule.
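The EG± trick can be sketched as follows; the dimension, stepsize and gradient values are hypothetical, and the renormalisation keeps the total mass of the doubled weight vector at $D$:

```python
import numpy as np

def eg_pm_step(w_plus, w_minus, g, eta=0.1, D=1.0):
    # Multiplicative update on the doubled coordinates [g, -g],
    # renormalised so the total mass stays D.
    w_plus = w_plus * np.exp(-eta * g)
    w_minus = w_minus * np.exp(eta * g)
    Z = (w_plus.sum() + w_minus.sum()) / D
    return w_plus / Z, w_minus / Z

d = 4
w_plus = np.full(d, 0.5 / d)    # uniform start, total mass D = 1
w_minus = np.full(d, 0.5 / d)
g = np.array([0.3, -0.1, 0.0, 0.2])
w_plus, w_minus = eg_pm_step(w_plus, w_minus, g)
x = w_plus - w_minus            # decision variable in the l1 ball of radius D
```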
Compared to EG, the p-norm algorithm, the update rule of which is given by
$$x_{t+1,i} = \frac{\operatorname{sgn}(y_{t+1,i})|y_{t+1,i}|^{q-1}}{\|y_{t+1}\|_q^{q-2}},$$
is better suited to the case of unknown $D$. To combine the ideas of EG and the p-norm algorithm, we consider the following generalised entropy function
$$\phi(x) = \alpha\Big((|x| + \beta)\ln\Big(\frac{|x|}{\beta} + 1\Big) - |x|\Big), \qquad \alpha, \beta > 0. \tag{1}$$
In the next lemma, we show the twice differentiability and strict convexity of $\phi$, based on which a strongly convex potential function for OMD on a compact decision set can be constructed.
Lemma 1 $\phi$ is twice continuously differentiable and strictly convex with
$$\phi'(x) = \alpha\operatorname{sgn}(x)\ln\Big(\frac{|x|}{\beta} + 1\Big), \qquad \phi''(x) = \frac{\alpha}{|x| + \beta}.$$
Furthermore, the convex conjugate is given by
$$\phi^*(\theta) = \alpha\beta\Big(\exp\Big(\frac{|\theta|}{\alpha}\Big) - \frac{|\theta|}{\alpha} - 1\Big).$$

Since we can expand the natural logarithm as $\ln(\frac{|x|}{\beta} + 1) = \frac{|x|}{\beta} - \frac{|x|^2}{2\beta^2} + \frac{|x|^3}{3\beta^3} - \dots$, $\phi(x)$ can intuitively be considered as an interpolation between the absolute value and the square. As observed in Figure 1a, it is closer to the absolute value than the hyperbolic entropy introduced in Ghai et al. (2020). Moreover, running OMD with $\phi$ yields the mirror map
$$(\phi^*)'(\theta) = \operatorname{sgn}(\theta)\,\beta\Big(\exp\Big(\frac{|\theta|}{\alpha}\Big) - 1\Big),$$
which sets the signs of the coordinates like the p-norm algorithm and updates their scale similarly to EG. As illustrated in Figure 1b, the mirror map of $\phi^*$ is close to the mirror map of EG, while the behaviour of HU is more similar to the gradient descent update.

Algorithms in the Euclidean Space
To obtain an adaptive and optimistic algorithm, we define the following time-varying regulariser
$$\phi_t(x) = \alpha_t \sum_{i=1}^{d}\Big((|x_i| + \beta)\ln\Big(\frac{|x_i|}{\beta} + 1\Big) - |x_i|\Big) \tag{2}$$
and apply it to the adaptive optimistic OMD (AO-OMD) update
$$x_{t+1} = \operatorname*{arg\,min}_{x \in K}\; \langle g_t - h_t + h_{t+1}, x\rangle + r_{t+1}(x) + \mathcal{B}_{\phi_t}(x, x_t) \tag{3}$$
for the sequence of subgradients $\{g_t\}$ and hints $\{h_t\}$, where $\mathcal{B}_{\phi_t}$ denotes the Bregman divergence associated with $\phi_t$. In a bounded domain, $\phi_t$ is strongly convex with respect to $\|\cdot\|_1$, which is shown in the next lemma.
Lemma 2 Let $K \subseteq \mathbb{R}^d$ be convex and bounded such that $\|x\|_1 \le D$ holds for all $x \in K$. Then $\phi_t$ is $\frac{\alpha_t}{2(D + d\beta)}$-strongly convex with respect to $\|\cdot\|_1$ over $K$.

With this strong convexity property, the regret of AO-OMD with regulariser (2) can be analysed in the framework of optimistic algorithms (Joulani et al., 2017) and is upper bounded as stated in Theorem 1.
EG can also be considered as an instance of FTRL with a constant stepsize. The update rule of the adaptive optimistic FTRL (AO-FTRL) is given by
$$x_{t+1} = \operatorname*{arg\,min}_{x \in K}\; \langle g_{1:t} + h_{t+1}, x\rangle + r_{1:t+1}(x) + \mathcal{B}_{\phi_{t+1}}(x, x_1). \tag{4}$$
The regret of AO-FTRL admits an upper bound of the same form, stated in Theorem 2.

Spectral Algorithms
We now consider the setting in which the decision variables are matrices taken from a compact convex set $K \subseteq \mathbb{R}^{m,n}$. A direct attempt to solve this problem is to apply updating rule (3) or (4) to the vectorised matrices. A regret bound of $O(D\sqrt{T\ln(mn)})$ can be guaranteed if the $\ell_1$ norms of the vectorised matrices in $K$ are bounded by $D$, which is not optimal. In many applications, elements of $K$ are assumed to have bounded nuclear norm, for which the spectral regulariser
$$\Phi_t = \phi_t \circ \sigma \tag{5}$$
can be applied. The next theorem gives the strong convexity of $\Phi_t$ with respect to $\|\cdot\|_1$ over $K$, which allows us to use $\{\Phi_t\}$ as the potential functions in OMD and FTRL.
Theorem 3 Let $\sigma: \mathbb{R}^{m,n} \to \mathbb{R}^{\min\{m,n\}}$ be the function mapping a matrix to its singular values. Then the function $\Phi_t = \phi_t \circ \sigma$ is $\frac{\alpha_t}{2(D + \min\{m,n\}\beta)}$-strongly convex with respect to the nuclear norm over the nuclear ball with radius $D$.

The proof of Theorem 3 follows the idea introduced in Ghai et al. (2020). Define the symmetrisation operator
$$S(X) = \begin{pmatrix} 0 & X \\ X^\top & 0 \end{pmatrix}.$$
The set $\mathcal{X} = \{S(X) \mid X \in \mathbb{R}^{m,n}\}$ is a finite dimensional linear subspace of the space $\mathbb{S}^{m+n}$ of symmetric matrices. Its dual space $\mathcal{X}^*$, determined by the Frobenius inner product, can be represented by $\mathcal{X}$ itself. For any $S(X) \in \mathcal{X}$, the set of eigenvalues of $S(X)$ consists of the singular values and the negated singular values of $X$. Since $\phi$ is even, $\Phi_t$ can be expressed through the spectral function induced by $\phi_t$ on $\mathcal{X}$, whose second-order behaviour is controlled by the following lemma.

Lemma 3 Let $f: \mathbb{R} \to \mathbb{R}$ be twice continuously differentiable. Then the function given by $F(X) = \sum_{i=1}^{d} f(\lambda_i(X))$ is twice differentiable, and for any $G, H \in \mathbb{S}^d$ the second differential $D^2 F(X)(G, H)$ can be expressed in terms of the divided differences of $f'$ at pairs of eigenvalues of $X$ and the entries $\tilde g_{ij}$ and $\tilde h_{ij}$ in the $i$-th row and $j$-th column of $U^\top G U$ and $U^\top H U$, respectively, where $U$ collects the eigenvectors of $X$.
Lemma 3 implies the unsurprising positive semidefiniteness of $D^2 F(X)$ for convex $f$. Furthermore, the exact expression of the second differential allows us to show the local smoothness of $\Phi_t^*$ using the local smoothness of $\phi^*$. Together with Lemma 4, the local strong convexity of $\Phi_t|_{\mathcal{X}}$ can be proved.
Lemma 4 can be considered as a generalised version of the local duality of smoothness and convexity proved in Ghai et al. (2020). The required positive definiteness of $D^2\Phi_t^*(\theta)$ is guaranteed by the exact expression of the second differential described in Lemma 3 and the fact that $(\phi^*)''(\theta) > 0$ for all $\theta \in \mathbb{R}$. Finally, using the construction of $\mathcal{X}$, the local strong convexity of $\Phi_t|_{\mathcal{X}}$ can be extended to $\Phi_t$. The complete proofs of Theorem 3 and the technical lemmata can be found in Appendix B.1.
With this strong convexity property, the regret of applying regulariser (5) in AO-OMD and AO-FTRL can be upper bounded by the following theorems.
Theorem 4 Let $K \subseteq \mathbb{R}^{m,n}$ be a compact convex set. Assume that there is some $D > 0$ such that $\|x\|_1 \le D$ holds for all $x \in K$. Let $\{x_t\}$ be the sequence generated by update rule (3) with regulariser (5) at iteration $t$. Setting $\beta = \frac{1}{\min\{m,n\}}$ and $\eta = \frac{1}{\ln(D+1) + \ln\min\{m,n\}}$, the regret is upper bounded by
$$\operatorname{Regret}_T \le c(m, n, D)\sqrt{\textstyle\sum_{t=1}^{T}\|g_t - h_t\|_\infty^2}$$
with $c(m, n, D) \in O(D\ln(D + 1) + \ln\min\{m,n\})$.
Theorem 5 Let $K \subseteq \mathbb{R}^{m,n}$ be a compact convex set with $\min\{m,n\} > e$. Assume that there is some $D \ge 1$ such that $\|x\|_1 \le D$ holds for all $x \in K$. Let $\{x_t\}$ be the sequence generated by updating rule (4) with the time-varying regulariser (5). Then the regret admits an upper bound of the same form as in Theorem 4.
With regulariser (5), both AO-OMD and AO-FTRL guarantee a regret upper bound growing with $\ln\min\{m,n\}$, which is the best known dependence on the size of the matrices.

Derived Algorithms
Given $z_{t+1} \in X^*$ and a time-varying closed convex function $R_{t+1}: K \to \mathbb{R}$, we consider the following updating rule:
$$y_{t+1} = \nabla\phi_{t+1}^*(-z_{t+1}), \qquad x_{t+1} = \operatorname*{arg\,min}_{x \in K}\; R_{t+1}(x) + \mathcal{B}_{\phi_{t+1}}(x, y_{t+1}). \tag{6}$$
It is easy to verify that (6) is equivalent to
$$x_{t+1} = \operatorname*{arg\,min}_{x \in K}\; \langle z_{t+1}, x\rangle + R_{t+1}(x) + \phi_{t+1}(x).$$
Setting $z_{t+1} = -\nabla\phi_{t+1}(x_1) + g_{1:t} + h_{t+1}$ and $R_{t+1} = r_{1:t+1}$, we obtain the AO-FTRL update (4). The rest of this section focuses on solving the second line of (6) for some popular choices of $r$ and $K$.

Elastic Net Regularisation
We first consider the setting of $K = \mathbb{R}^d$ and $R_{t+1}(x) = \gamma_1\|x\|_1 + \frac{\gamma_2}{2}\|x\|_2^2$, which has countless applications in machine learning. It is easy to verify that the Bregman divergence associated with $\phi_{t+1}$ is given by $\mathcal{B}_{\phi_{t+1}}(x, y) = \phi_{t+1}(x) - \phi_{t+1}(y) - \langle\nabla\phi_{t+1}(y), x - y\rangle$, so that the second line of (6) decomposes into $d$ single-variable problems.

The minimiser of each single-variable problem in $\mathbb{R}$ can simply be obtained by setting the subgradient to $0$. For $\ln(\frac{|y_{i,t+1}|}{\beta} + 1) \le \frac{\gamma_1}{\alpha_{t+1}}$, we set $x_{i,t+1} = 0$. Otherwise, the zero-subgradient condition implies $\operatorname{sgn}(x_{i,t+1}) = \operatorname{sgn}(y_{i,t+1})$, and $|x_{i,t+1}|$ is given by the root of a transcendental equation, which can be expressed in terms of the principal branch $W_0$ of the Lambert function and can be well approximated. For $\gamma_2 = 0$, i.e. the $\ell_1$ regularised problem, $|x_{i,t+1}|$ has the closed form
$$|x_{i,t+1}| = (|y_{i,t+1}| + \beta)\exp\Big(-\frac{\gamma_1}{\alpha_{t+1}}\Big) - \beta.$$
The implementation is described in Algorithm 1.
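Since the update only needs $W_0$ on the non-negative axis, a few Newton iterations suffice to approximate it well; the initial guess below is a hypothetical but serviceable choice, not the approximation scheme used by the authors:

```python
import math

def lambert_w0(z, tol=1e-12):
    # Principal branch W_0 of the Lambert function: solves w * exp(w) = z
    # for z >= 0 via Newton's method.
    w = math.log1p(z)  # cheap initial guess
    for _ in range(50):
        e = math.exp(w)
        step = (w * e - z) / (e * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w
```

For instance, `lambert_w0(1.0)` approximates the omega constant.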

Nuclear and Frobenius Regularisation
Similarly, we consider $K = \mathbb{R}^{m,n}$ with a regulariser $R_{t+1}(x) = \gamma_1\|x\|_1 + \frac{\gamma_2}{2}\|x\|_2^2$ mixing the nuclear and Frobenius norms. The second line of update rule (6) can then be implemented spectrally. Let $y_{t+1}$ and $\tilde y_{t+1}$ be as defined in (9), and let $U_{t+1}\operatorname{diag}(\sigma(\tilde y_{t+1}))V_{t+1}^\top$ be an SVD of $\tilde y_{t+1}$. It is easy to verify that the minimiser of (10) is obtained by applying the vector update of the previous subsection to the singular values. From the characterisation of the subgradient, it follows that the subgradient of the objective (10) at $x_{t+1} = U_{t+1}\operatorname{diag}(\bar x_{t+1})V_{t+1}^\top$ is $0$, where $\bar x_{t+1}$ collects the updated singular values.

Projection onto the Cross-Polytope
Next, we consider the setting in which $r_t$ is the zero function and $K$ is the $\ell_1$ ball with radius $D$. Clearly, we can simply set $x_{t+1} = y_{t+1}$ if $\|y_{t+1}\|_1 \le D$. Otherwise, Algorithm 2 describes a sorting-based procedure projecting $y_{t+1}$ onto the $\ell_1$ ball with time complexity $O(d\log d)$. The correctness of the algorithm is shown in the next lemma.
Lemma 5 Let $y \in \mathbb{R}^d$ with $\|y\|_1 > D$, and let $x^*$ be the vector returned by Algorithm 2 (project$(y, D, \beta)$). Then $x^*$ is the Bregman projection of $y$ onto the $\ell_1$ ball with radius $D$.

For the case that $K \subseteq \mathbb{R}^{m,n}$ is the nuclear ball with radius $D$ and $\|y_{t+1}\|_1 > D$, we need to solve the corresponding projection problem in which the constant part of the Bregman divergence is removed. From von Neumann's trace inequality, the Frobenius inner product is upper bounded by the inner product of the vectors of singular values. The equality holds when $x$ and $U_{t+1}\operatorname{diag}(\phi'_{t+1}(\sigma(\tilde y_{t+1})))V_{t+1}^\top$ share a simultaneous SVD, i.e. the minimiser has an SVD of the form $x = U_{t+1}\operatorname{diag}(\sigma(x))V_{t+1}^\top$. Thus the problem is reduced to a projection of the vector of singular values, which can be solved by Algorithm 2, and the projection step of update rule (6) can be implemented by projecting the singular values and reassembling the matrix.
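The paper's Algorithm 2 projects with respect to the Bregman divergence induced by the entropy-like regulariser; for orientation, the classical Euclidean projection onto the $\ell_1$ ball follows the same sort-and-threshold pattern in $O(d\log d)$. This is the standard construction, not the authors' Algorithm 2:

```python
import numpy as np

def project_l1_ball(y, D):
    # Euclidean projection onto {x : ||x||_1 <= D} via sorting.
    if np.abs(y).sum() <= D:
        return y.copy()
    u = np.sort(np.abs(y))[::-1]               # sorted magnitudes, descending
    css = np.cumsum(u)
    k = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - (css - D) / k > 0)[0][-1]
    theta = (css[rho] - D) / (rho + 1)         # soft-threshold level
    return np.sign(y) * np.maximum(np.abs(y) - theta, 0.0)

x = project_l1_ball(np.array([3.0, -1.0, 0.5]), 2.0)
```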

Stochastic Acceleration
Finally, we consider the stochastic optimisation problem
$$\min_{x \in K}\; l(x) + r(x),$$
where $l: X \to \mathbb{R}$ and $r: K \to \mathbb{R}_{\ge 0}$ are closed convex functions. In the stochastic setting, instead of having direct access to $l$, we query a stochastic gradient $g_t$ of $l$ at $z_t$ in each iteration $t$ with $E[g_t \mid z_t] \in \partial l(z_t)$. An algorithm with an adaptive and optimistic regret bound can easily be converted into a stochastic optimisation algorithm by applying its update rule to the scaled stochastic gradient $a_t g_t$ and the hint $a_{t+1} g_t$, which is described in Algorithm 3.

Algorithm 3 Stochastic Acceleration
Input: optimistic algorithm $\mathcal{A}$, compact convex set $K$ and closed convex function $r$
for $t = 1, 2, \dots$ do
    Update $\mathcal{A}$ with $K$, $\alpha_{t+1} r$, scaled subgradient $a_t g_t$ and hint $a_{t+1} g_t$
end for
Return $x_{t+1}$

Joulani et al. (2020) has shown the convergence of the accelerated AdaGrad for problems in $\mathbb{R}^d$. We extend the result to any finite dimensional normed vector space in the following corollary.
Corollary 1 Let $(X, \|\cdot\|)$ be a finite dimensional normed vector space and $K \subseteq X$ a compact convex set. Let $\mathcal{A}$ be an optimistic algorithm generating $x_t \in K$ at iteration $t$, and denote by $\nu_t^2 = E[\|g_t - l'(z_t)\|_*^2 \mid z_t]$ the variance of the gradient estimate. If $\mathcal{A}$ has an adaptive regret upper bound of the form above, then there is some $L > 0$ such that the error incurred by Algorithm 3 is upper bounded by the optimal accelerated rate for smooth loss functions. Applying update rule (3) or (4) with regulariser (2) or (5) to Algorithm 3, the constant $c_2$ is proportional to $\sqrt{\ln d}$ and $\sqrt{\ln\min\{m,n\}}$ for $X = \mathbb{R}^d$ and $X = \mathbb{R}^{m,n}$, respectively, while the accelerated AdaGrad has a linear dependence on the dimensionality (Joulani et al., 2020).
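A schematic of the conversion in Algorithm 3 is given below; the weights $a_t = t$ follow the acceleration literature, while the `OGD` class, its stepsize, and the 1-d quadratic objective are hypothetical stand-ins for a real optimistic base algorithm, so this illustrates only the wrapper pattern:

```python
class OGD:
    """Toy stand-in for the base online algorithm A (not the paper's method)."""
    def __init__(self, x0=1.0):
        self.x, self.t = x0, 0
    def decision(self):
        return self.x
    def step(self, g, hint=None):
        self.t += 1
        self.x -= g / (self.t ** 1.5)  # crude decaying stepsize for scaled gradients

def accelerated(A, grad_oracle, T):
    # Query the (stochastic) gradient at the weighted running average z_t and
    # feed the scaled gradient a_t * g_t (with hint a_{t+1} * g_t) to A.
    z_sum, a_sum = 0.0, 0.0
    for t in range(1, T + 1):
        a_t = t
        z_sum += a_t * A.decision()
        a_sum += a_t
        z_t = z_sum / a_sum
        g_t = grad_oracle(z_t)
        A.step(a_t * g_t, hint=(t + 1) * g_t)
    return z_sum / a_sum

# Deterministic toy objective l(x) = x^2 / 2 with gradient x.
result = accelerated(OGD(), lambda z: z, T=1000)
```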

Experiments
This section presents the empirical evaluation of the developed algorithms. We carry out experiments on both synthetic and real-world data and demonstrate the performance of the OMD (Exp-MD) and FTRL (Exp-FTRL) variants based on the exponentiated update.

Online Logistic Regression
For a sanity check, we simulate a $d$-dimensional online logistic regression problem, in which the model parameter $w^*$ has 99% sparsity and the non-zero values are randomly drawn from the uniform distribution over $[-1, 1]$. At each iteration $t$, we sample a random feature vector $x_t$ from the uniform distribution over $[-1, 1]^d$ and generate a label $y_t \in \{-1, 1\}$ using a logit model, i.e. $\Pr[y_t = 1] = (1 + \exp(-w^{*\top} x_t))^{-1}$. The goal is to minimise the cumulative regret with $l_t(w) = \ln(1 + \exp(-y_t w^\top x_t))$. We choose $d = 10{,}000$ and compare our algorithms with AdaGrad, AdaFTRL (J. Duchi et al., 2011) and HU (Ghai et al., 2020). For both AdaGrad and AdaFTRL, we set the $i$-th diagonal entry of the proximal matrix $H_t$ to $h_{ii} = 10^{-6} + \sqrt{\sum_{s=1}^{t-1} g_{s,i}^2}$ as their theory suggests (J. Duchi et al., 2011); the remaining hyperparameters are chosen to guarantee an adaptive regret upper bound. All algorithms take decision variables from an $\ell_1$ ball $\{w \in \mathbb{R}^d \mid \|w\|_1 \le D\}$, which is the ideal case for HU. We examine the performance of the algorithms with known, underestimated and overestimated $\|w^*\|_1$ by setting $D = \|w^*\|_1$, $D = \frac{1}{2}\|w^*\|_1$ and $D = 2\|w^*\|_1$, respectively. For each choice of $D$, we simulate the online process of each algorithm for 10,000 iterations and repeat the experiments for 20 trials. Figure 2 plots the curves of the average cumulative regret with the ranges of standard deviation as shaded regions. As can be observed, our algorithms have a clear and stable advantage over the AdaGrad-style algorithms and slightly outperform HU in the experiments with known $\|w^*\|_1$. As the combination of the entropy-like regulariser and FTRL can also be used for parameter-free optimisation (Cutkosky & Boahen, 2017a), overestimating $\|w^*\|_1$ does not have a tangible impact on the performance of Exp-FTRL, which leads to its clear advantage over the rest.
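The simulated problem above can be sketched as follows; a smaller dimension than the paper's $d = 10{,}000$ is used for illustration, and the random seed is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000                        # illustrative; the paper uses d = 10,000
w_star = np.zeros(d)
nz = rng.choice(d, size=d // 100, replace=False)   # 99% sparsity
w_star[nz] = rng.uniform(-1.0, 1.0, size=nz.size)

def sample_round():
    # One round of the simulated online logistic regression problem.
    x = rng.uniform(-1.0, 1.0, size=d)
    p = 1.0 / (1.0 + np.exp(-w_star @ x))          # logit model Pr[y = 1]
    y = 1 if rng.random() < p else -1
    return x, y

def logistic_loss(w, x, y):
    return np.log1p(np.exp(-y * (w @ x)))

x, y = sample_round()
```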

Online Multitask Learning
Next, we examine the performance of the developed spectral algorithms using a simulated online multi-task learning problem (Kakade et al., 2012), in which we need to solve $k$ highly correlated $d$-dimensional online prediction problems simultaneously. The data are generated as follows. We first randomly draw two orthogonal matrices $U \in GL(d, \mathbb{R})$ and $V \in GL(k, \mathbb{R})$. Then we generate a $k$-dimensional vector $\sigma$ with $r$ non-zero values randomly drawn from the uniform distribution over $[0, 10]$ and construct a low-rank parameter matrix $W^* = U\operatorname{diag}(\sigma)V^\top$. At each iteration $t$, $k$ feature and label pairs $(x_{t,1}, y_{t,1}), \dots, (x_{t,k}, y_{t,k})$ are generated using $k$ logit models with the $i$-th parameter vector taken from the $i$-th row of $W^*$. The loss function is given by $l_t(W) = \sum_{i=1}^{k}\ln(1 + \exp(-y_{t,i} w_i^\top x_{t,i}))$. We set $d = 100$, $k = 25$ and $r = 5$, take the nuclear ball $\{W \in \mathbb{R}^{d,k} \mid \|W\|_1 \le D\}$ as the decision set and run the experiment as in Subsection 6.1. The average and standard deviation of the results over 20 trials are shown in Figure 3.
Similarly to the online logistic regression experiment, our algorithms have a clear advantage over AdaGrad and AdaFTRL and slightly outperform HU in all settings. While the regret of the AdaGrad-style algorithms spreads over a wider range, our algorithms yield relatively more stable results. The superiority of Exp-FTRL for the overestimated $\|W^*\|_1$ can also be observed in Figure 3c.

Optimisation for Contrastive Explanations
Generating the contrastive explanation of a machine learning model (Dhurandhar et al., 2018) is the most motivating application of this paper. Given a sample $x_0 \in X$ and a machine learning model $f: X \to \mathbb{R}^K$, the contrastive explanation consists of a set of pertinent positive (PP) features and a set of pertinent negative (PN) features, which can be found by solving the following optimisation problem (Dhurandhar et al., 2018):
$$\min_{x \in W}\; l_{x_0}(x) + \lambda_1\|x\|_1 + \frac{\lambda_2}{2}\|x\|_2^2.$$
Let $\kappa \ge 0$ be a constant and define $k_0 = \arg\max_i f(x_0)_i$. The loss function for finding PP imposes a penalty on the features that do not justify the prediction, while PN is the set of features altering the final classification and is modelled by a corresponding loss function. In the experiment, we first train a ResNet20 model (He, Zhang, Ren, & Sun, 2016) on the CIFAR-10 dataset (Krizhevsky, 2009), which attains a test accuracy of 91.49%. For each class of images, we randomly pick 100 correctly classified images from the test dataset and generate PP and PN for them. For PP, we take the set of all feasible images as the decision set, while for PN, we take the set of tensors $x$ such that $x_0 + x$ is a feasible image.
We first consider the white-box setting, in which we have access to $l_{x_0}$. Our goal is to demonstrate the performance of the accelerated AO-OMD and AO-FTRL based on the exponentiated update (AccAOExpMD and AccAOExpFTRL). In Dhurandhar et al. (2018), the fast iterative shrinkage-thresholding algorithm (FISTA) (Beck & Teboulle, 2009) is applied to finding the PP and PN; we therefore take FISTA as our baseline. In addition, our algorithms are also compared with the accelerated AO-OMD and AO-FTRL with AdaGrad-style stepsizes (AccAOMD and AccAOFTRL) (Joulani et al., 2020).
We pick $\lambda_1 = \lambda_2 = \frac{1}{2}$, which is the largest value from the set $\{2^{-i} \mid i \in \mathbb{N}\}$ allowing FISTA to attain a negative loss $l_{x_0}$ for 10 randomly selected images. All algorithms start from $x_1 = 0$. Figure 4 plots the convergence behaviour of the five algorithms, averaged over the 1000 images. In the experiment for PP, our algorithms clearly outperform the AdaGrad-style algorithms. Although FISTA converges faster in the first 100 iterations, it makes little further progress afterwards due to the tiny stepsize found by the backtracking rule. In the experiment for PN, all algorithms behave similarly. It is worth pointing out that the backtracking rule of FISTA requires multiple function evaluations per iteration, which are expensive for explaining deep neural networks.
Next, we consider the black-box setting, in which the gradient is estimated through the two-point estimation
$$\tilde g_t = \frac{1}{b}\sum_{i=1}^{b}\frac{\delta}{2\mu}\big(l(z_t + \mu v_i) - l(z_t - \mu v_i)\big)v_i,$$
where $\delta, \mu$ are constants and $v_i$ is a random vector. Following X. Chen et al. (2019), we set $\delta = d$ and sample $v_i$ independently from the uniform distribution over the unit sphere for the AdaGrad-style algorithms. Since the convergence of our algorithms depends on the variance of the gradient estimation in $(\mathbb{R}^d, \|\cdot\|_\infty)$, we set $\delta = 1$ and sample the entries $v_{i,1}, \dots, v_{i,d}$ independently from the Rademacher distribution according to (2015). To ensure a small bias of the gradient estimation, we set $\mu = \frac{1}{\sqrt{dT}}$, which is the recommended value for non-convex and constrained optimisation in X. Chen et al. (2019). The performance of the algorithms is examined in the high and low variance settings with $b = 1$ and $b = \sqrt{T}$, respectively. Since the problem is stochastic, FISTA, which searches for the stepsize at each iteration, is not practical; we therefore remove it from the comparison.

Figure 5 plots the convergence behaviour of the algorithms in the high variance setting. Our algorithms outperform the AdaGrad-style algorithms for generating both PP and PN. Furthermore, the FTRL-based algorithms converge faster than the MD-based ones in the first few iterations, leading to overall better performance. The experimental results of the low variance setting are plotted in Figure 6. Though AccAOExpFTRL yields the smallest objective value at the beginning of the experiments, it gets stuck in a local minimum around $0$ and is outperformed by AccAOExpMD and AccAOFTRL in later iterations. Overall, the algorithms based on the exponentiated update have an advantage over the AdaGrad-style algorithms in both the high and low variance settings.
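The two-point estimator with Rademacher directions can be sketched as below; the toy quadratic objective and the batch size are illustrative, and with $\delta = 1$ the sketch matches the variant used for the proposed methods only up to the paper's exact scaling:

```python
import numpy as np

def two_point_grad(f, x, mu=1e-3, b=10, rng=None):
    # Averages delta * (f(x + mu v) - f(x - mu v)) / (2 mu) * v over b
    # Rademacher directions v (delta = 1 for this sampling scheme).
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(b):
        v = rng.choice([-1.0, 1.0], size=x.shape)
        g += (f(x + mu * v) - f(x - mu * v)) / (2 * mu) * v
    return g / b

f = lambda x: 0.5 * np.sum(x ** 2)   # toy smooth objective with gradient x
x = np.array([1.0, -2.0, 3.0])
g = two_point_grad(f, x)
```

A larger batch `b` reduces the variance of the estimate, mirroring the $b = \sqrt{T}$ low-variance setting.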

Conclusion
This paper proposes and analyses a family of online optimisation algorithms based on an entropy-like regulariser combined with the ideas of optimism and adaptivity. The proposed algorithms have adaptive regret bounds depending only logarithmically on the dimensionality of the problem, can handle popular composite objectives, and can easily be converted into stochastic optimisation algorithms with the optimal accelerated convergence rate for smooth functions. As a future research direction, we plan to analyse the convergence of the proposed algorithms combined with variance-reduction techniques for non-convex stochastic optimisation and to study their empirical performance for training deep neural networks.

Declarations

Funding
The research leading to these results received funding from the German Federal Ministry for Economic Affairs and Climate Action under Grant Agreement No. 01MK20002C.

Code availability
The implementation of the experiments and of all algorithms involved in them is available on GitHub: https://github.com/mrdexteritas/exp_grad.

Availability of data and materials
The source code for generating the synthetic data, creating the neural networks and training the models is available on GitHub: https://github.com/mrdexteritas/exp_grad. The CIFAR-10 data are collected from https://www.cs.toronto.edu/~kriz/cifar.html.

Conflicts of Interests and Competing Interests
The authors declare that they have no conflicts of interest or competing interests.

Ethics Approval
Not Applicable.

Consent to Participate
Not Applicable

Consent for Publication
Not Applicable.

Appendix A Missing Proofs of Section 3.1

A.1 Proof of Lemma 1
For any h ∈ R, we have where the first inequality uses the fact ln x ≤ x − 1. Furthermore, we have where the first inequality uses the fact ln x ≥ 1 − 1/x. Thus, for h < 0 we have from which the limit for h → 0 follows. Now let h = 0. Then, from the inequalities of the logarithm, it follows Thus, we obtain φ′(0) = α/β. By the definition of the convex conjugate, we have which is differentiable. The maximiser y satisfies ln(|y|/β + 1) sgn(y) = θ.
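The stationarity condition ln(|y|/β + 1) sgn(y) = θ inverts to y = sgn(θ) β(e^{|θ|} − 1). The following numeric check assumes (as an illustration, since the display defining the regulariser is lost above) the unscaled form ψ(y) = (|y| + β) ln(|y|/β + 1) − |y|, whose derivative is exactly ln(|y|/β + 1) sgn(y):

```python
import numpy as np

# Brute-force the maximiser of theta*y - psi(y) and compare it with the
# closed form y = sgn(theta) * beta * (exp(|theta|) - 1) implied by the
# first-order condition ln(|y|/beta + 1) * sgn(y) = theta.
beta, theta = 0.5, 1.3

def psi(y):
    # assumed entropy-like regulariser; psi'(y) = ln(|y|/beta + 1) * sgn(y)
    return (np.abs(y) + beta) * np.log(np.abs(y) / beta + 1) - np.abs(y)

ys = np.linspace(-10, 10, 2_000_001)
y_star = ys[np.argmax(theta * ys - psi(ys))]          # grid maximiser
y_closed = np.sign(theta) * beta * (np.exp(abs(theta)) - 1)
assert abs(y_star - y_closed) < 1e-2
```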

A.2 Proof of Lemma 2
Proof Let x ∈ K be arbitrary. We have where the first inequality follows from the Cauchy-Schwarz inequality. For a twice differentiable function, this clearly implies strong convexity.

A.3 Proof of Theorem 1
Proposition 1 Let K ⊆ X be a convex set. Assume that r_t : K → R_{≥0} is a closed convex function defined on K and that ψ_t : K → R is η_t-strongly convex w.r.t. ‖·‖ over K. Then the sequence {x_t} generated by (3) with regularisers {ψ_t} guarantees Proof From the optimality condition, it follows that for all x ∈ K Then, we have Adding up from 1 to T, we obtain The terms h_1, h_{T+1} and x_{T+1}, which are artifacts of the analysis, can be set to 0. Then, we simply obtain Since r_{T+1} is not involved in the regret, we assume without loss of generality r_1 = r_{T+1}. From the η_t-strong convexity of ψ_t, we have where the second inequality uses the definition of the dual norm, and the third inequality follows from the fact ab ≤ a^2/2 + b^2/2. The claimed result follows.
Proof of Theorem 1 Proposition 1 can be applied directly, and we obtain Using Lemma 8, we bound the first term of (A3): Using Lemma 6, the second term of (A3) can be bounded as The third term of (A3) is simply 0, since we set α_1 = 0. Setting η = 1/(ln(D + 1) + ln d) and combining the inequalities above, we obtain the claimed result.

A.4 Proof of Theorem 2
Proposition 2 Let K ⊆ X be a compact convex set such that ‖x‖ ≤ D holds for all x ∈ K, and let r_t : K → R_{≥0} and φ_t : K → R be closed convex functions defined on K. Assume that φ_t is η_t-strongly convex w.r.t. ‖·‖ over K and that φ_t ≤ φ_{t+1} for all t = 1, . . . , T. Then the sequence {x_t} generated by (4) with regularisers {φ_t} guarantees Proof of Proposition 2 First, define ψ_t = r_{1:t} + φ_t. Then, we have Setting the artifact h_{T+1} to 0, rearranging and adding ∑_{t=1}^T ⟨g_t, w_t⟩ to both sides, we obtain From the definition of ψ_t, it follows where we assume r_{T+1} ≡ 0, since it is not involved in the regret. Furthermore, for t ≥ 1 we have where the first inequality uses the definition of the convex conjugate and the second inequality follows from the fact φ^*_{t+1} ≤ φ^*_t. Adding up from 1 to T, we obtain where we use r_{T+1} ≡ 0 and h_{T+1} = 0. Combining the inequality above and rearranging, we have Next, by the definition of the Bregman divergence, we have Putting (A6) and (A7) together, we have Combining the inequalities above, we obtain Proof of Theorem 2 We take the Bregman divergence B_{φ_t}(x, x_1) as the regulariser at iteration t. Since B_{φ_t}(x, x_1) is non-negative, increasing in t and 2α_t/(D + βd)-strongly convex w.r.t. ‖·‖_1, Proposition 2 can be applied directly, and we get where the inequality uses the assumptions D ≥ 1 and d > e. Adding up from 1 to T, we obtain The first term can be bounded by Lemma 8: Combining the inequalities above, we obtain with c(D, d) ∈ O(D ln(D + 1) + ln d), which is the claimed result.
Appendix B Missing Proofs of Section 3.2

B.1 Proof of Theorem 3
The proof of Theorem 3 is based on an idea of Ghai et al. (2020). We first review some technical lemmata.
Proof of Lemma 3 Define F̄ : S^d → S^d, X ↦ U diag(f(λ_1(X)), . . . , f(λ_d(X)))U^⊤. Apparently, we have F(X) = Tr F̄(X). From Theorem V.3.3 in Bhatia (2013), it follows that F̄ is differentiable and Using the linearity of the trace and the chain rule, F is differentiable, and its directional derivative at X in direction H is given by where h̄_{ii} is the i-th diagonal element of the matrix U^⊤ H U. Next, define F̄′ : S^d → S^d, X ↦ U diag(f′(λ_1(X)), . . . , f′(λ_d(X)))U^⊤.

Then we have DF(X) : H ↦ Tr(F̄′(X)H).
Applying Theorem V.3.3 in Bhatia (2013) again, we obtain the differentiability of F̄′ and Note that X ↦ Tr(X(·)) is a linear map between finite-dimensional spaces. Thus, F is twice differentiable. From the linearity of the trace operator and of matrix multiplication, it follows that D_H F(X) is differentiable. Applying the chain rule, we obtain which is the claimed result.
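The derivative formula used above, DF(X)H = Tr(F̄′(X)H) for the spectral function F(X) = Tr F̄(X) = ∑_i f(λ_i(X)), can be checked numerically; a sketch with f = exp as an illustrative smooth test function:

```python
import numpy as np

# Finite-difference check of D_H F(X) = Tr(fbar'(X) H) for
# F(X) = sum_i f(lambda_i(X)) on symmetric X, with fbar'(X) = U diag(f'(lambda)) U^T.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); X = (A + A.T) / 2    # symmetric test point
H = rng.standard_normal((4, 4)); H = (H + H.T) / 2    # symmetric direction

lam, U = np.linalg.eigh(X)
grad = U @ np.diag(np.exp(lam)) @ U.T                  # fbar'(X) for f = exp

F = lambda M: np.exp(np.linalg.eigvalsh(M)).sum()      # F(X) = Tr exp(X)
eps = 1e-6
fd = (F(X + eps * H) - F(X - eps * H)) / (2 * eps)     # symmetric difference
assert abs(fd - np.trace(grad @ H)) < 1e-5
```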
Proof of Lemma 4 Since D^2 Φ^*(θ) ∈ L(X^*, L(X^*, R)) is positive definite and X is finite dimensional, the map Thus, we obtain the convex conjugate ψ^*_θ for θ = DΦ(v) and all x ∈ X. Thus, we have f^{−1}_θ = D^2 Φ(DΦ^*(θ)) and Finally, since ψ_θ(v) ≤ (1/2)‖v‖^2_* holds for all v ∈ X^*, we can reverse the order by applying Proposition 2.19 in Barbu and Precupanu (2012) and obtain for all which is the claimed result.
Finally, we can prove Theorem 3.
Proof of Theorem 3 We start the proof by introducing the required definitions. Define the operator S : R^{m,n} → S^{m+n}, X ↦ [[0, X], [X^⊤, 0]]. The set X = {S(X) | X ∈ R^{m,n}} is a finite-dimensional linear subspace of the space of symmetric matrices S^{m+n}, and thus (X, ‖·‖_1) is a finite-dimensional Banach space. Its dual space X^*, determined by the Frobenius inner product, can be represented by X itself. Denote by B(D) = {X ∈ R^{m,n} | ‖X‖_1 ≤ D} the nuclear ball with radius D. Then the set K = {S(X) | X ∈ B(D)} is a nuclear ball in X with radius 2D, since ‖S(X)‖_1 = 2‖X‖_1 holds for all X ∈ R^{m,n}. Let S(X) ∈ K be arbitrary. Denote by F_t = Φ_t|_X the restriction of Φ_t to X. Next, we show the strong convexity of F_t over K. From the conjugacy formula of Theorem 2.4 in Lewis (1995) and Lemma 1, it follows that F^*_t where the second equality follows from the fact that Φ^*_t is absolutely symmetric. By Lemma 1 and Lemma 3, F^*_t is twice differentiable. Let X ∈ K be arbitrary and Θ = DF_t(X) ∈ X^*. For simplicity, we define Then, for all H ∈ X, D^2 F^*_t(Θ) is clearly positive definite over S^{m+n}, since γ(f_t, Θ)_{ij} > 0 for all i and j. Furthermore, from the mean value theorem and the convexity of f_t, there is a c_{ij} ∈ (0, 1) such that holds for all λ_i(Θ) ≠ λ_j(Θ). Thus, we obtain where the last line uses von Neumann's trace inequality and the fact that the ranks of H ∈ X and Θ are at most 2 min{m, n}. Since H^2 is positive semi-definite, σ_i(H^2) = σ_i(H)^2 holds for all i. Furthermore, f_t(x) ≥ 0 holds for all x ∈ R. Thus, the last line of (B8) can be rewritten as Recall that Θ = DF_t(S(X)) for S(X) ∈ K. Together with Lemma 1, we obtain By the construction of K, it is clear that |λ_i(S(X))| ≤ 2D. Thus, (B9) can be further upper bounded by Finally, applying Lemma 4, we obtain which implies the α_t/(4(D + min{m, n}β))-strong convexity of F_t over K. It remains to prove the strong convexity of Φ_t over B(D) ⊆ R^{m,n}. Let X, Y ∈ B(D) be arbitrary matrices in the nuclear ball. The following inequality can be obtained: which implies the α_t/(2(D + min{m, n}β))-strong convexity of Φ_t, as desired.
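The two spectral facts the proof relies on, that the nonzero eigenvalues of S(X) are ± the singular values of X and hence ‖S(X)‖_1 = 2‖X‖_1, are easy to verify numerically; a small sketch:

```python
import numpy as np

# Check the symmetrisation embedding S(X) = [[0, X], [X^T, 0]]:
# its positive eigenvalues are the singular values of X, so the
# nuclear (trace) norm doubles: ||S(X)||_1 = 2 * ||X||_1.
rng = np.random.default_rng(1)
m, n = 3, 5
X = rng.standard_normal((m, n))
S = np.zeros((m + n, m + n))
S[:m, m:] = X
S[m:, :m] = X.T

svals = np.linalg.svd(X, compute_uv=False)               # descending order
pos = np.sort(np.linalg.eigvalsh(S))[::-1][: min(m, n)]  # top eigenvalues of S(X)
assert np.allclose(pos, svals)
assert abs(np.abs(np.linalg.eigvalsh(S)).sum() - 2 * svals.sum()) < 1e-8
```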

B.2 Proof of Theorem 4
Proof The proof is almost identical to that of Theorem 1. From the strong convexity of Φ_t shown in Theorem 3 and the general upper bound of Proposition 1, we obtain Using Lemma 8, we have ≤ 4D(ln(D + 1) + ln min{m, n}) The claimed result is obtained by combining the inequalities above.

B.3 Proof of Theorem 5
Proof Since B_{Φ_t}(x, x_1) is non-negative, increasing in t and 2α_t/(D + βd)-strongly convex w.r.t. ‖·‖_1, Proposition 2 can be applied directly, and we get Setting β = 1/min{m, n} and η = 1/√(ln(D + 1) + ln min{m, n}), we have where the inequality uses the assumptions D ≥ 1 and min{m, n} > e. Adding up from 1 to T, we obtain R_{1:T} ≤ B_{Φ_t}(x, x_1) + 2D ln(D + 1) + ln min{m, n} The first term can be bounded by Lemma 8: Combining the inequalities above, we obtain with c(D, m, n) ∈ O(D ln(D + 1) + ln min{m, n}), which is the claimed result.
Appendix C Missing Proofs of Section 3.4

C.1 Proof of Lemma 5

Proof of Lemma 5 Let x^* be the minimiser of B_{ψ_{t+1}}(x, y_{t+1}) in K. Using the fact ln a ≥ 1 − 1/a, we obtain Thus, y_i = 0 implies x^*_i = 0. Furthermore, sgn(x^*_i) = sgn(y_i) must hold for all i with y_i ≠ 0, since otherwise we could flip the sign of x^*_i to obtain a smaller objective value. We therefore assume without loss of generality that y_i ≥ 0. We claim that ∑_{i=1}^d x^*_i = D holds for the minimiser x^*. If this were not the case, there would be some i with x^*_i < y_i, and increasing x^*_i by a small enough amount would decrease the objective function. Thus, minimising the Bregman divergence can be rewritten as Using Lagrange multipliers with x ∈ R^d, λ ∈ R and ν ∈ R^d, we obtain the Lagrangian
Setting ∂L/∂x_i = 0, we obtain From the complementary slackness, we have ν_i = 0 for x_i > 0, which implies where z = exp(λ). Let x^* be the minimiser and I = {i : x^*_i > 0} the support of x^*. Then we have D + |I|β = (1/z)(∑_{i∈I} y_i + |I|β).
Let p be a permutation of {1, . . . , d} such that y_{p(i)} ≤ y_{p(i+1)}. Define It follows that p(j) is not in the support I, since otherwise this would imply x^*_{p(j)} ≤ 0. Thus, the minimisation problem (C11) is equivalent to Define the function R : R_{>0} → R, x ↦ x ln x. It can be verified that R is convex. The objective function in (C12) can be further rewritten as where the inequality follows from Jensen's inequality. The minimum is attained if and only if the ratios (x_{p(i)} + β)/(y_{p(i)} + β) are equal for all i. This is only possible when p(i) is in the support I for all i ≥ ρ. Thus, we can set z =
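The sorted-support characterisation above suggests the following procedure; this is our own illustrative sketch (the function name and the search over support sizes are assumptions), using the recoverable relations x_i + β = (y_i + β)/z on the support and z = (∑_{i∈I} y_i + |I|β)/(D + |I|β):

```python
import numpy as np

def entropy_projection(y, D, beta):
    """Sketch: minimise the entropy-like Bregman divergence to y over
    {x >= 0, sum(x) = D}, assuming y >= 0 as in the proof. Candidate
    supports are the entries with largest y; on a support I of size k,
    z = (sum_I y + k*beta) / (D + k*beta) and x_i = (y_i + beta)/z - beta."""
    y = np.asarray(y, dtype=float)
    order = np.argsort(-y)                 # largest entries enter the support first
    x = np.zeros_like(y)
    for k in range(y.size, 0, -1):         # shrink support until all entries positive
        I = order[:k]
        z = (y[I].sum() + k * beta) / (D + k * beta)
        xi = (y[I] + beta) / z - beta
        if xi.min() > 0:                   # valid support found
            x[I] = xi
            break
    return x
```

By construction ∑_{i∈I} x_i = (∑_I y_i + kβ)/z − kβ = D, so the simplex constraint holds exactly on the returned support.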

C.2 Proof of Corollary 1
Proposition 3 Let {x_t} be an arbitrary sequence and {y_t} the sequence produced by y_{t+1} = (a_t/a_{1:t}) x_t + (1 − a_t/a_{1:t}) y_t. Choosing a_t > 0, we have, for all x ∈ W, a_{1:T}(f(y_{T+1}) − f(x)) ≤ R_{1:T} + ∑_{t=1}^T a_{1:t−1} B_l(y_t, y_{t+1}), with R_{1:T} = ∑_{t=1}^T a_t(⟨g_t, x_t − x⟩ + r(x_t) − r(x)).
Proof It is interesting to see that the averaging scheme can be considered as an instance of the linear coupling introduced in Allen-Zhu and Orecchia (2017). For any sequences {x_t}, {y_t} and z_t = (a_t/a_{1:t}) x_t + (1 − a_t/a_{1:t}) y_t, we start the proof by bounding a_t(l(y_{t+1}) − l(x)) as follows: a_t(l(y_{t+1}) − l(x)) = a_t(l(y_{t+1}) − l(z_t) + l(z_t) − l(x)) = a_t(l(y_{t+1}) − l(z_t) + ⟨∇l(z_t), z_t − x⟩ − B_l(z_t, x)) = a_t(l(y_{t+1}) − l(z_t) + ⟨∇l(z_t), z_t − x_t⟩ + ⟨∇l(z_t), x_t − x⟩ − B_l(z_t, x)). (C13) Denote by τ_t = a_t/a_{1:t} the weight. The first term of the inequality above can be further bounded by = a_t(1/τ_t − 1)(l(y_t) − l(y_{t+1})) + (a_t/τ_t)(l(y_{t+1}) − l(z_t)) − a_{1:t−1} B_l(y_t, z_t). (C14) Next, we have Combining (C13), (C14) and (C15), we have a_{1:T}(f(y_{T+1}) − f(x)) = ∑_{t=1}^T (a_t/τ_t)(l(y_{t+1}) − l(z_t)) + ∑_{t=1}^T (a_{1:t−1} B_l(y_t, z_t) − a_t B_l(z_t, x)). Simply setting y_{t+1} := z_t makes the first term above 0 and implies z_t = ∑_{s=1}^t a_s x_s / a_{1:t}. Furthermore, it follows from the convexity of r that r(y_{T+1}) = r(∑_{s=1}^T a_s x_s / a_{1:T}) ≤ ∑_{t=1}^T a_t r(x_t) / a_{1:T}.
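The identity z_t = ∑_{s=1}^t a_s x_s / a_{1:t} used at the end of the proof, i.e. that the recursive coupling reproduces the weighted average, is easy to confirm numerically:

```python
import numpy as np

# Iterating y_{t+1} = (a_t/a_{1:t}) x_t + (1 - a_t/a_{1:t}) y_t
# should reproduce the weighted average sum_t a_t x_t / a_{1:T}.
rng = np.random.default_rng(2)
T, d = 50, 4
a = rng.uniform(0.1, 2.0, size=T)
xs = rng.standard_normal((T, d))

y = np.zeros(d)
cum = 0.0
for t in range(T):
    cum += a[t]                      # a_{1:t}
    tau = a[t] / cum                 # weight tau_t
    y = tau * xs[t] + (1 - tau) * y  # the coupling step
assert np.allclose(y, (a[:, None] * xs).sum(0) / a.sum())
```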

Appendix D Technical Lemmata
Lemma 6 For positive values a_1, . . . , a_n, the following holds: Proof The proof of (1) can be found in Lemma A.2 of Levy et al. (2018). For (2), we define A_0 = 1 and A_i = ∑_{k=1}^i a_k + 1 for i > 0. Then we have where the inequality follows from the concavity of the logarithm.
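The two displays of Lemma 6 are lost to extraction; based on the cited Lemma A.2 of Levy et al. (2018) and the telescoping-logarithm proof, we assume they are the standard inequalities ∑_i a_i/√(a_{1:i}) ≤ 2√(a_{1:n}) and ∑_i a_i/(1 + a_{1:i}) ≤ ln(1 + a_{1:n}), and check them numerically:

```python
import numpy as np

# Randomised check of the two (assumed) inequalities of Lemma 6
# for positive a_1, ..., a_n.
rng = np.random.default_rng(3)
for _ in range(100):
    a = rng.uniform(0.01, 5.0, size=int(rng.integers(1, 30)))
    c = np.cumsum(a)                                       # a_{1:i}
    assert (a / np.sqrt(c)).sum() <= 2 * np.sqrt(c[-1]) + 1e-12
    assert (a / (1 + c)).sum() <= np.log(1 + c[-1]) + 1e-12
```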
Lemma 7 Let l be convex and M-smooth over X, i.e. l(x) ≤ l(y) + ⟨∇l(y), x − y⟩ + (M/2)‖x − y‖^2 for all x, y ∈ X. Then (1/(2M))‖∇l(x) − ∇l(y)‖^2_* ≤ B_l(x, y) for all x, y ∈ X.
Proof Let x, y ∈ X be arbitrary. Define h : X → R, z ↦ l(z) − ⟨∇l(y), z⟩. Clearly, h is M-smooth and minimised at y. Thus, we have where the first inequality uses the M-smoothness of h, and the second uses ⟨∇h(x), z − x⟩ ≥ −‖∇h(x)‖_* ‖z − x‖, for which we choose z such that equality holds. This implies (1/(2M))‖∇l(x) − ∇l(y)‖^2_* ≤ l(x) − l(y) − ⟨∇l(y), x − y⟩ = B_l(x, y), and the desired result follows.
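The conclusion of Lemma 7 can be sanity-checked in the Euclidean case, where a convex quadratic is M-smooth with M the largest eigenvalue of its Hessian; a small sketch:

```python
import numpy as np

# Randomised check of (1/(2M)) * ||grad l(x) - grad l(y)||^2 <= B_l(x, y)
# for the M-smooth convex quadratic l(x) = 0.5 * x^T Q x.
rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5))
Q = A.T @ A                                # positive semi-definite Hessian
M = np.linalg.eigvalsh(Q).max()            # smoothness constant (l2 geometry)
l = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

for _ in range(100):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    breg = l(x) - l(y) - grad(y) @ (x - y)           # Bregman divergence B_l(x, y)
    diff = grad(x) - grad(y)
    assert diff @ diff / (2 * M) <= breg + 1e-9
```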