Switched diffusion processes for non-convex optimization and saddle points search

We introduce and investigate stochastic processes designed to find local minimizers and saddle points of non-convex functions, exploring the landscape more efficiently than the standard noisy gradient descent. The processes switch between two behaviours: a noisy gradient descent and a noisy saddle point search. They are proven to be well-defined and to converge to a stationary distribution in the long-time limit. Numerical experiments are provided on low-dimensional toy models and on Lennard–Jones clusters.


Introduction

Overview
This work addresses the issue of finding the local minimizers and the saddle points of a given non-convex potential U : R^d → R_+. These questions are ubiquitous in a wide range of scientific fields. In particular, in molecular dynamics, where U is the energy of a molecular system, the minimizers are the metastable states of the system, while the saddle points are the transition states between the former. As we will see, we design a stochastic algorithm which targets both local minimizers and saddle points, in such a way that each of these tasks benefits from the other. In particular, the algorithm is suitable even as a pure optimisation tool, when the goal is only to find the global minimizers of U.
In high dimension, this problem is challenging: an exhaustive search is impossible, and running local search algorithms from random initial conditions only works when it is possible to sample starting points that have a reasonable probability of hitting the basin of attraction of the points of interest (for a gradient descent or a saddle point search), which is often not the case when these basins are concentrated around an unknown manifold of dimension much smaller than d.
A classical solution in order to visit the whole space while taking into account the energy landscape (hence focusing on low-energy areas) is to follow the overdamped Langevin diffusion process, which is the Markov process solving

dX_t = −∇U(X_t) dt + √(2ε) dB_t,   (1)

where B is a d-dimensional Brownian motion and ε > 0 is a temperature parameter. When ε is small, this is a noisy perturbation of a deterministic gradient descent (contrary to the stochastic gradient descent algorithm, here the noise is voluntarily added, it has a constant intensity and does not come from a Monte Carlo approximation of ∇U). Under standard growth conditions on U at infinity, the process is ergodic with respect to the measure

µ_ε(dx) = e^{−U(x)/ε} dx / ∫_{R^d} e^{−U(y)/ε} dy,

i.e. the time spent in any given domain is asymptotically proportional to its probability with respect to µ_ε. Hence, the temperature ε has to be taken small enough to counteract the entropic effect due to the high dimension d and to ensure that the vicinity of the minimizers has a high probability. Then, one can use the point of the trajectory with the lowest energy as a starting point for a local gradient descent (or a variant, like the stochastic gradient descent). However, as ε vanishes, it is well known that (1) is metastable, since transitions between different potential wells occur in a time of order roughly e^{c/ε} for some c > 0, due to the energy barriers induced by low-probability regions interspersing high-probability ones. This makes the convergence of the law of the process towards its equilibrium, and thus the exploration, very slow, which makes this method fail in practice.
Besides, according to large deviations theory, more precisely to the Freidlin–Wentzell results on the low-noise behaviour of diffusion processes, it is known that, as ε vanishes, the process (1) is likely to exit a potential well through the saddle point with the lowest energy level on the boundary of the well. Instead of waiting for an unlikely deviation of the Brownian motion to lead the process (1) to this saddle point, this naturally motivates the definition of a process which actively looks for it. More precisely, the main object of this work is a process which switches randomly between two dynamics: the noisy gradient descent (1) and a noisy saddle point search.
The rest of this work is organized as follows. In the remainder of the introduction, we present two saddle point search algorithms in Section 1.2, and we define the stochastic processes we are going to investigate in Section 1.3. In Section 2, we conduct a theoretical study of the processes, and in Section 3 a numerical study.

Deterministic saddle point algorithms
Since saddle point search is less standard than optimization, let us first present some existing deterministic algorithms on this topic. A first class of methods, based on reaction paths, requires the knowledge of two local minimizers, and then finds a path of minimal elevation between them, which passes through a saddle point, see e.g. [18, 11]. This does not correspond to our context, where the local minimizers are unknown. A second class of methods, considered e.g. in [12, 26, 14, 24, 29, 4, 15] and references within, which will be the one of interest for us, relies on local walkers, analogous to the gradient descent for optimization, i.e. solutions of some ODEs for which saddle points of U are stable equilibria. Given some x ∈ R^d, write

λ_1(x) ≤ λ_2(x) ≤ · · · ≤ λ_d(x)

for the ordered eigenvalues of ∇²U(x), and suppose that we are given v_1(x), . . . , v_d(x) ∈ R^d such that v_i(x) is an eigenvector of ∇²U(x) associated to λ_i(x) (in other words, when the eigenvalues are not all simple, we assume that we have an arbitrary rule to select a given eigenbasis, for instance we take the basis obtained as the limit of the Jacobi algorithm). The idealised saddle dynamics (ISD) is the solution of

ẋ_t = −(I_d − 2v_1(x_t)v_1(x_t)ᵀ)∇U(x_t).

In other words, x_t follows a gradient descent, except in the direction v_1(x_t) where a reflection is performed, i.e. the process follows a gradient ascent in this direction. Notice that in general x → v_1(x) is not continuous, so the existence of this process is unclear and may be restricted to some parts of the space or to a finite time interval.
A variation of the ISD is the so-called Gentlest Ascent Dynamics (GAD):

ẋ_t = −(I_d − 2v_t v_tᵀ)∇U(x_t),
η v̇_t = −(I_d − v_t v_tᵀ)∇²U(x_t)v_t,

where η > 0 is a fixed parameter. If ∇²U(x_t) were fixed in the second equation, the latter would be a gradient descent on S^{d−1} for the Rayleigh quotient v → vᵀ∇²U(x_t)v. As η vanishes, formally, we recover the ISD. Contrary to the latter, the GAD is always well-defined, for all initial conditions and all times.
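To fix ideas, here is a minimal numpy sketch of explicit Euler steps for the deterministic ISD and GAD, run on a toy double-well (the potential, step sizes and the finite-difference Hessian are illustrative assumptions, not the experiments of Section 3):

```python
import numpy as np

def hessian_fd(grad, x, h=1e-5):
    """Finite-difference Hessian of U from its gradient (symmetrized)."""
    d = x.size
    H = np.empty((d, d))
    for i in range(d):
        e = np.zeros(d); e[i] = h
        H[:, i] = (grad(x + e) - grad(x - e)) / (2 * h)
    return 0.5 * (H + H.T)

def isd_step(x, grad, dt):
    """One Euler step of the ISD: x' = -(I - 2 v1 v1^T) grad U(x),
    with v1 an eigenvector of the smallest Hessian eigenvalue."""
    _, V = np.linalg.eigh(hessian_fd(grad, x))  # ascending eigenvalues
    v1 = V[:, 0]
    g = grad(x)
    return x + dt * (-(g - 2 * v1 * (v1 @ g)))

def gad_step(x, v, grad, dt, eta=0.1):
    """One Euler step of the GAD; v is kept on the unit sphere."""
    H = hessian_fd(grad, x)
    g = grad(x)
    x_new = x + dt * (-(g - 2 * v * (v @ g)))
    v_new = v + (dt / eta) * (-(H @ v - v * (v @ H @ v)))
    return x_new, v_new / np.linalg.norm(v_new)

# Toy potential U(x,y) = (x^2-1)^2 + 2y^2: minima (+-1,0), saddle (0,0).
grad = lambda z: np.array([4 * z[0] * (z[0]**2 - 1), 4 * z[1]])

x = np.array([0.7, 0.3])
for _ in range(2000):
    x = isd_step(x, grad, 1e-3)
print(np.round(x, 3))   # converges near the saddle (0, 0)
```

Started inside the region where the x-direction carries the smallest eigenvalue, the ISD performs a gradient ascent in x and a descent in y, hence the convergence towards the saddle point rather than a minimum.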
Proposition 1. The following results on the ISD and the GAD are from [24, Theorems 2 and 3] and [12].
• Any critical point of U is an equilibrium point for the ISD. It is stable if and only if it is an index-1 saddle point.
• Any pair (x, v) with x a critical point of U and v a unit eigenvector of ∇²U(x) is an equilibrium point of the Gentlest Ascent Dynamics. It is stable if and only if x is an index-1 saddle point and v is associated to the negative eigenvalue.
• If Ω ⊂ R^d is such that λ_1(x) < 0 < λ_2(x) for all x ∈ Ω, then there exists a unique saddle point z* in Ω, and for all x_0 ∈ Ω, there exists a unique solution to the ISD started at x_0, which converges exponentially fast towards z*.
The convergence results are only local, and one cannot expect more in general since the ISD may have attracting singularities [24]. In the case of the GAD, with a potential of the form U(x) = Σ_{i=1}^d U_i(x_i) and an initial condition v_0 ∈ vect(e_1, . . . , e_i), v_t will stay orthogonal to e_j for j > i, and hence the process will never get close to an equilibrium point whose eigenvector is not in vect(e_1, . . . , e_i). As in the optimization case, where the overdamped Langevin diffusion (1) allows for a global exploration while the gradient descent only converges locally, it is thus natural to add a small Brownian noise to the deterministic ISD and GAD. The resulting processes will eventually explore the whole space but, thanks to Proposition 1, will still be attracted locally by saddle points (notice that the Freidlin–Wentzell theory on random perturbations of ODEs does not apply directly in the ISD case because of the discontinuity of x → v_1(x)).

The stochastic processes
For simplicity, from now on, we restrict ourselves to the case of a function defined on a compact manifold (specifically the periodic torus). Indeed, in practical situations, local minima and saddle points are located in a compact set. Most of the definitions and arguments below would be easily adapted to R^d provided suitable modifications outside a given compact set, in particular to ensure the stability of the processes.
Write T^d for the d-dimensional torus and fix some function U ∈ C²(T^d, R_+). As in the previous section, we write λ_i(x) for the ordered eigenvalues of ∇²U(x), and we consider functions x → v_i(x) such that v_i(x) is an eigenvector of ∇²U(x) associated to λ_i(x) for all x ∈ T^d and i ∈ {1, . . . , d}. Moreover, we assume that the functions v_i are measurable for all i ∈ {1, . . . , d}, which is indeed possible:

Lemma 2. For all i ∈ {1, . . . , d} there exists a measurable function v_i : T^d → S^{d−1} such that v_i(x) is an eigenvector of ∇²U(x) associated to λ_i(x) for all x ∈ T^d.
Proof. Given a symmetric matrix H and (e_1, . . . , e_d) the canonical basis of R^d, the function H → (u_1, µ_1, . . . , u_d, µ_d), where u_i is the limit of the gradient descent for the Rayleigh quotient starting from e_i and µ_i is the associated eigenvalue, is measurable. At least one of the µ_i is the smallest eigenvalue of H, hence we may select the smallest index i_0 such that this is the case and consider the associated vector. We can then iterate by applying the same procedure to (I − u_{i_0}u_{i_0}ᵀ)H(I − u_{i_0}u_{i_0}ᵀ) + (‖H‖ + 1)u_{i_0}u_{i_0}ᵀ, and similarly by induction. We conclude with the fact that x → ∇²U(x) is continuous, hence measurable.
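The selection rule of the proof can be made concrete. The following numpy sketch (the step size, iteration count and the use of the spectral norm for the shift are implementation assumptions) runs the Rayleigh-quotient descent from each canonical vector, keeps the smallest value found, and deflates:

```python
import numpy as np

def smallest_eigvec(H, v0, steps=5000, lr=1e-2):
    """Projected gradient descent of v -> v^T H v on the unit sphere,
    started from v0; a deterministic, hence measurable, function of H."""
    v = v0 / np.linalg.norm(v0)
    for _ in range(steps):
        g = H @ v
        g = g - v * (v @ g)            # project on the tangent space
        v = v - lr * g
        v = v / np.linalg.norm(v)
    return v, v @ H @ v

def eigenbasis_by_deflation(H):
    """Eigenbasis selection as in Lemma 2: find the smallest eigenvalue
    among the descents started from the canonical basis, then shift the
    found direction up by ||H|| + 1 and repeat on the deflated matrix."""
    d = H.shape[0]
    A = H.copy()
    basis, vals = [], []
    for _ in range(d):
        cands = [smallest_eigvec(A, e) for e in np.eye(d)]
        v, _ = min(cands, key=lambda c: c[1])   # smallest Rayleigh value
        basis.append(v); vals.append(v @ H @ v)
        P = np.eye(d) - np.outer(v, v)
        A = P @ A @ P + (np.linalg.norm(H, 2) + 1) * np.outer(v, v)
    return np.array(vals), np.array(basis).T
```

The shift by ‖H‖ + 1 pushes the already-found direction above the remaining spectrum, so each round isolates the next eigenvalue in increasing order.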
As in R^d, for ε > 0 the overdamped Langevin process is defined on T^d as the solution of (1). It is ergodic with respect to µ_ε, the probability measure with density proportional to e^{−U/ε} on the torus. Similarly, we define the noisy ISD as the solution on T^d of

dX_t = −(I_d − 2v_1(X_t)v_1(X_t)ᵀ)∇U(X_t) dt + √(2ε) dB_t,   (2)

which, contrary to its deterministic counterpart, is always well-defined (see Proposition 3 below). Concerning the noisy version of the GAD, we may wonder whether it is necessary to add noise to both coordinates, and thus in general we can consider the noisy GAD as the solution on T^d × S^{d−1} of

dX_t = −(I_d − 2V_tV_tᵀ)∇U(X_t) dt + √(2ε̄) dB_t,
η dV_t = −(I_d − V_tV_tᵀ)∇²U(X_t)V_t dt + √(2ε)(I_d − V_tV_tᵀ) ∘ dB̄_t,   (3)

where η > 0, ε̄ > 0, ε ≥ 0 and B, B̄ are independent d-dimensional Brownian motions (∘ denoting the Stratonovich integral). The second line is such that, if ∇²U(X_t) were constant, it would be an overdamped Langevin diffusion on the sphere with potential given by the Rayleigh quotient (see e.g. [22, Section 3.2.3] or [1]). In particular, according to [22, Lemma 3.18], V_t ∈ S^{d−1} for all t ≥ 0. In fact, by contrast with the deterministic case, since the noisy ISD is well-defined, we will not insist much on the noisy GAD, and simply discuss the case ε = 0 in Section 2.1.
Next, we define the idealised switched process (ISP) as the Markov process (X_t, I_t)_{t≥0} on T^d × {0, 1}, where (I_t)_{t≥0} is a Poisson process on {0, 1} with jump rate ν > 0 and X solves

dX_t = H_{I_t}(X_t) dt + √(2ε) dB_t,   (4)

with (B_t)_{t≥0} a Brownian motion independent from (I_t)_{t≥0} and, for all x ∈ T^d,

H_0(x) = −∇U(x),   H_1(x) = −(I_d − 2v_1(x)v_1(x)ᵀ)∇U(x).

In other words, the process alternates between the overdamped Langevin dynamics and the noisy ISD, the times between two switching events being independent and distributed according to an exponential law of parameter ν. Likewise, we can define the gentle switched process (GSP) as the Markov process (X_t, V_t, I_t)_{t≥0} on T^d × S^{d−1} × {0, 1}, which follows the overdamped Langevin dynamics (1) in X when I_t = 0 and the noisy GAD (3) when I_t = 1, where (I_t)_{t≥0} is as in the ISP, independent from B, B̄.

Finally, let us introduce slight variations of the processes defined above. Consider

S = {x ∈ T^d : λ_1(x) = λ_2(x)},

which is the set of singularities of the deterministic ISD flow. As pointed out in [24], some points of S may be attracting singularities for this flow, and thus they will also locally attract the noisy ISD when ε is small, or the noisy GAD when ε and η are small. This behaviour may be mitigated as follows. Fix some non-decreasing f : R_+ → [1, 2] such that f(0) = 1 and f(r) = 2 for all r larger than some small threshold r* > 0. Then, setting a(x) = f(λ_2(x) − λ_1(x)), the drift H_1 in the noisy ISD may be replaced by

H̃_1(x) = −(I_d − a(x)v_1(x)v_1(x)ᵀ − (2 − a(x))v_2(x)v_2(x)ᵀ)∇U(x).

In other words, when λ_1(x) and λ_2(x) are clearly distinguished, we recover the previous noisy ISD; however, on S, rather than undergoing an orthogonal reflection with respect to v_1(x), ∇U(x) is orthogonally projected on the orthogonal of the space spanned by v_1(x) and v_2(x), which means that, in these two directions, the process behaves like a Brownian motion. Natural choices for f would be piecewise constant (with f(r) = 1 for r < r*) or piecewise linear (with f(r) = 1 + r/r* for r < r*). In the second case, the drift H̃_1 is continuous.
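The alternation between the two drifts can be sketched as follows with an Euler–Maruyama discretisation of (4); the double-well potential used at the end is a hypothetical example, and the time step and parameters are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def isp_trajectory(grad, hess, x0, T, dt, eps, nu):
    """Euler-Maruyama simulation of the ISP: drift H_0 = -grad U in
    mode 0, reflected saddle-search drift H_1 in mode 1, the mode
    flipping at the jumps of a rate-nu Poisson process."""
    x, mode = np.array(x0, float), 0
    t, t_switch = 0.0, rng.exponential(1 / nu)
    traj = [(t, x.copy(), mode)]
    while t < T:
        g = grad(x)
        if mode == 1:
            _, V = np.linalg.eigh(hess(x))
            v1 = V[:, 0]                    # eigenvector of lambda_1
            g = g - 2 * v1 * (v1 @ g)       # reflect: drift becomes H_1
        x = x - dt * g + np.sqrt(2 * eps * dt) * rng.standard_normal(x.size)
        t += dt
        while t >= t_switch:                # exponential switching times
            mode = 1 - mode
            t_switch += rng.exponential(1 / nu)
        traj.append((t, x.copy(), mode))
    return traj

# Hypothetical double-well U(x, y) = (x^2 - 1)^2 + 2 y^2.
grad = lambda z: np.array([4 * z[0] * (z[0] ** 2 - 1), 4 * z[1]])
hess = lambda z: np.array([[12 * z[0] ** 2 - 4, 0.0], [0.0, 4.0]])

traj = isp_trajectory(grad, hess, [0.9, 0.0], T=5.0, dt=1e-3, eps=0.05, nu=2.0)
```

Between two exponential clocks the scheme is a plain discretised diffusion, so the switched process is simulated at essentially the cost of the noisy ISD alone.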
An analogous modification can be applied to the drift of the V component of the noisy GAD (we refer to [22, Section 3.2.3] to derive the case ε > 0, where the constraint V_t ∈ S^{d−1} has to be preserved). This variation can be generalised to any degree of degeneracy of the smallest eigenvalue, i.e. for k ≥ 2, modifying the drift H_1 in the vicinity of the set where λ_1(x) = · · · = λ_k(x) (generically these sets are empty for k > 2, but in many practical cases they are not, due to some symmetries in the problem).

Related works
The overdamped and underdamped Langevin processes have been widely studied for decades. For an introduction in the context of stochastic algorithms, see [22, 23]. In an optimisation rather than sampling context, it is also possible to use a temperature ε that varies with time. For example, simulated annealing uses a temperature that goes to 0, so that the process converges in probability towards the minima of U; see the classical [19] or the more recent [21] for more references.
Concerning saddle point search algorithms, we refer to the works already mentioned in Section 1.2, namely [18, 11] for minimal paths between two known local minima and [12, 26, 14, 24, 29, 4, 15] for eigenvector-following type algorithms like the ISD or the GAD. One could also see a saddle point as a minimum of the function x → |∇U(x)|² or variants, which is done in [10, 6].
Although the use of switched diffusion processes for stochastic optimization algorithms and the specific processes introduced in Section 1.3 are new, there are already many works on switched (also called Markov-modulated) diffusion processes, usually in contexts and with motivations rather different from ours. In the case of an Ornstein–Uhlenbeck process with switched parameters, much can be said about the long-time behaviour of the process, see [7, 3, 25]. In a more general case, the fast-switching and small-temperature regime has been studied in [16, 20, 9]. Switched processes are also used for scientific modelling, see [28] for an example in finance.

Theoretical analysis
Before turning to numerical experiments, we are interested in the existence of the different processes and in some of their properties, mainly in the long-time limit. For the well-known Langevin process, we refer to [22, 23].
A valuable tool to study the long-time behavior of a process is the Doeblin condition. A Markov kernel P : T^d → M_1(T^d) is said to satisfy a Doeblin condition if there exist c > 0 and a probability measure µ such that P(x) ≥ cµ for all x ∈ T^d. This implies that (P^n)_n converges in total variation to the unique probability measure µ_∞ such that µ_∞P = µ_∞, see for example [17]. In the continuous-time setting, this can be written as follows: there exist t_0, c > 0 and a probability measure µ such that δ_xP_{t_0} ≥ cµ for all x ∈ T^d, where P_t is the semi-group of the process, so that δ_xP_{t_0} is the law at time t_0 of the process starting from x.
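The quantitative content of the Doeblin condition, namely geometric convergence in total variation at rate at least (1 − c), can be illustrated on a small finite-state sketch (the 3-state kernel below is a made-up example, with µ uniform):

```python
import numpy as np

# A Markov kernel P on 3 states with P(x, .) >= c * mu, mu uniform:
# every entry dominates 0.1, so c = 3 * 0.1 = 0.3.
P = np.array([[0.6, 0.3, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.1, 0.7]])
c = 3 * P.min()                       # Doeblin constant for mu uniform
assert c > 0

# Stationary measure: left eigenvector of P for the eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

# Total variation distance to pi decays at least like (1 - c)^n.
delta = np.array([1.0, 0.0, 0.0])     # Dirac mass at state 0
for n in range(1, 20):
    delta = delta @ P
    tv = 0.5 * np.abs(delta - pi).sum()
    assert tv <= (1 - c) ** n + 1e-12
```

The bound comes from the classical coupling argument: at each step, with probability c the two copies are redrawn from the same measure µ, so the distance between any two laws contracts by the factor (1 − c).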

Noisy ISD and Noisy GAD
Since we work on the compact torus and the drift of the noisy ISD (2) is measurable and bounded, the well-posedness and long-time behavior follow from classical results on diffusion processes. We summarize these results in the next proposition. Apart from the continuous-time equation (2), we are also interested in the corresponding Euler–Maruyama scheme. The noisy ISD is covered by the following:

Proposition 3. Consider σ > 0, b : T^d → R^d measurable and bounded, and X solution to

dX_t = b(X_t) dt + σ dB_t.   (6)

For any initial condition x_0 ∈ T^d and Brownian motion B, there exists a unique Markov process X^{x_0} solution to equation (6) with Brownian motion B and initial condition x_0. Moreover, this process admits a unique invariant probability measure, which admits a density with respect to the Lebesgue measure, and the law of the process converges exponentially fast towards this stationary measure in the total variation distance.
The Euler–Maruyama scheme, defined for δ > 0 by

X_{n+1} = X_n + δ b(X_n) + σ√δ G_{n+1},

with (G_n)_n a sequence of independent standard Gaussian variables, admits as well a unique stationary measure µ_δ, and there exist c, C, δ_0 > 0 independent from δ such that for all 0 < δ < δ_0 and all n ∈ N,

‖Law(X_n) − µ_δ‖_TV ≤ Ce^{−cnδ}.

Proof. The existence of a unique solution X^{x_0} to equation (6) with initial condition x_0 comes from [31, Corollaries 7.1.7 and 8.1.7]. By Itô's formula, the law M^{x_0} of (X^{x_0}_t)_{t∈[0,T]} (a probability measure on C([0,T], T^d)) is a measure solution to the Kolmogorov equation

∂_t h = (σ²/2)Δh − ∇·(bh).

Thus, according to [5, Proposition 6.5.1], Law(X^{x_0}_t) (the time-marginal of M^{x_0}) admits a density h^{x_0}(t, ·) with respect to the Lebesgue measure for all t > 0, bounded over any compact time interval in the classical Sobolev norm ‖·‖_{H^{p,1}} for some p > d + 2. Since we are in a compact set, h^{x_0} ∈ H^{2,1}, and [5, Proposition 6.2.7] yields that h^{x_0} satisfies a Harnack inequality for t large enough: there exist C, τ > 0 such that for all x_0 ∈ T^d and t large enough,

sup_{T^d} h^{x_0}(t, ·) ≤ C inf_{T^d} h^{x_0}(t + τ, ·).

In particular, there exists t_0 > 0 such that

inf_{x_0 ∈ T^d} inf_{T^d} h^{x_0}(t_0, ·) > 0.

This is a Doeblin condition for the law of the process, with reference measure the Lebesgue measure, and in particular it classically implies the existence of a unique stationary measure for the process (2) and the exponential convergence of the law at time t toward this equilibrium.
For the Euler–Maruyama scheme, fix n = ⌈t_0/δ⌉ and let A be a measurable set of T^d; we are looking for a lower bound on P(X^{x_0}_n ∈ A). Let X̄^{x_0}_n be the Euler scheme seen on R^d rather than T^d (considering b as a periodic function), with initial condition given by the representative of x_0 in [0,1)^d, and let Ā be the intersection of [0,1)^d with the pre-image of A by the periodic quotient R^d → T^d. Then P(X^{x_0}_n ∈ A) ≥ P(X̄^{x_0}_n ∈ Ā), and, the increments being Gaussian with a bounded drift, the n-step transition density of X̄^{x_0} is bounded below on [0,1)^d uniformly in x_0 and δ, so that P(X̄^{x_0}_n ∈ Ā) ≥ c Leb(Ā) for some c > 0. In other words, the n-step transition kernel of the Euler scheme satisfies a Doeblin condition with a constant independent from the step size δ. This implies the existence of a unique stationary measure µ_δ such that for all k ∈ N,

‖Law(X^{x_0}_{kn}) − µ_δ‖_TV ≤ (1 − c)^k,

which gives the stated exponential convergence for all n and concludes the proof.
We now turn to the analysis of the noisy GAD (3). When ε > 0, we get an elliptic diffusion with bounded drift on a smooth compact manifold without boundary, so essentially the adaptation of Proposition 3 to this more general context applies. In the following we focus on the case ε = 0. Indeed, one may think that it is only necessary to have noise on the position in order to visit the whole space, and that the auxiliary vector V will then converge to the associated eigenvectors. The question is whether the process is hypoelliptic and controllable. However, it is clear that this cannot be the case in general, simply by considering counter-examples of the separable form U(x) = Σ_{i=1}^d U_i(x_i). Indeed, in these cases, if e.g. v_0 = (1, 0, . . . , 0), then v_t = v_0 for all t ≥ 0, hence there is no hope of having a unique stationary measure and convergence towards it for all initial conditions. Let us discuss some additional conditions that ensure this result. First, notice that if (X_t, V_t)_{t≥0} is a noisy GAD then so is (X_t, −V_t)_{t≥0}. In fact the relevant variable to define the process is not V_t but the class {V_t, −V_t} in the projective space P^d = S^{d−1}/R with the equivalence relation given by vRu iff u = −v or u = v, and we identify the process with its projection on T^d × P^d (which is still a Markov process, solution of the same SDE (3) but where the drift of V_t is seen as a vector field on P^d).
Proposition 4. If U is C³, then the noisy GAD is well-defined. If, in addition, the potential U satisfies the following:

• There exists (x̃, ṽ) ∈ T^d × P^d at which the generator of the noisy GAD satisfies a weak Hörmander condition.

• For all v ∈ P^d, there exists H in the convex hull of the set {∇²U(x), x ∈ T^d} such that, denoting by (λ̃_i) the ordered eigenvalues of H and (ṽ_i) the associated eigenvectors, v = ṽ_1 and λ̃_1 < λ̃_2.
Then the noisy GAD admits a unique stationary measure µ_GAD on T^d × P^d with a positive density with respect to the Lebesgue measure. Moreover, for any initial condition, the law at time t of the process converges exponentially fast to µ_GAD in total variation.
Remark 1. The second condition is implied by the following: for all v ∈ P^d, there exists x ∈ T^d such that v is the eigenvector associated to the smallest eigenvalue of ∇²U(x), this eigenvalue being simple.

Proof. If U is C³, then the noisy GAD is an SDE with Lipschitz coefficients, hence its well-posedness. The first additional assumption ensures that the generator of the noisy GAD satisfies a weak Hörmander condition at (x̃, ṽ), see [30].
Next, we want to establish the controllability of the process, in the following sense: fix (x_ini, v_ini), (x_f, v_f) ∈ T^d × P^d and δ > 0. We want to show that there exist T > 0 and a control u ∈ C⁰([0, T], R^d) such that the solution of

ẋ(t) = −(I_d − 2v(t)v(t)ᵀ)∇U(x(t)) + u(t),
η v̇(t) = −(I_d − v(t)v(t)ᵀ)∇²U(x(t))v(t),

with (x(0), v(0)) = (x_ini, v_ini), satisfies ‖(x(T), v(T)) − (x_f, v_f)‖ ≤ δ. We first suppose that v_ini and v_f are not orthogonal. Fix H such that the second condition of Proposition 4 is satisfied for v_f, together with x_1, . . . , x_p and non-negative weights α_1, . . . , α_p summing to 1 such that H = Σ_{i=1}^p α_i ∇²U(x_i). Fix T > 0, n ∈ N and γ > 0 such that T = n(γ + γ² + 2γ⁴) + γ² + γ⁴. Writing β_i = α_1 + · · · + α_i, the control u is chosen so that, on each of n successive periods, x stays at the point x_i during a time α_iγ for each i, visiting x_1, . . . , x_p successively, and u is linear on the remaining intervals of size γ⁴, during which x travels between these points. As γ goes to 0, v converges towards the solution of

η ẇ(t) = −(I_d − w(t)w(t)ᵀ)Hw(t),   w(0) = v_ini,

which is a gradient descent of the Rayleigh quotient of H on the sphere and thus, since v_ini is not orthogonal to ṽ_1 = v_f, converges to v_f (in P^d) as t → ∞. Now, all that remains to do is to fix T > 0 large enough so that w(T) is in the δ-neighborhood of v_f, then γ small enough so that v(T) is as well, and n accordingly.
If v_ini and v_f are orthogonal, there exists an intermediate v_inter ∈ P^d which is orthogonal to neither v_ini nor v_f, and the same control argument allows to go from (x_ini, v_ini) to (x_ini, v_inter) in a time T, and then from (x_ini, v_inter) to (x_f, v_f) in another time T.
We conclude with [30, Theorem 2.1] for the existence of the stationary measure and the convergence, and with [2, Theorem 5.2] for the positivity of the density.

Switched processes
When the mode (I_t)_{t≥0} is an autonomous Markov chain on a finite set {0, . . . , m−1}, m ∈ N (i.e., as is the case for the processes we have considered in Section 1.3, when its jump rate does not depend on X), then the well-posedness of switched processes of the form

dX_t = H_{I_t}(X_t) dt + √(2ε) dB_t   (8)

with some drifts H_0, . . . , H_{m−1} immediately follows from the well-posedness of the corresponding diffusion processes for each i ∈ {0, . . . , m−1}, since the process is then simply defined by induction along the jump times of I. In particular, in view of the previous section, the ISP (4) is well-defined. Moreover, in general, it is sufficient that one of the diffusion processes satisfies a Doeblin condition to imply the same for the switched process:

Proposition 5. If I is irreducible and there exists i_0 such that the diffusion (8) with I_t ≡ i_0 satisfies a Doeblin condition (see the introduction of Section 2), then the switched process admits a unique invariant probability measure, and the law of the switched process converges exponentially fast towards this stationary measure.
Proof. The existence results from the previous construction. Fix t_0 > 0, c > 0 and a probability measure l such that for all t ≥ t_0,

Law(X^x_t) ≥ c l,

where X^x is a solution to equation (8) with I_t ≡ i_0 and X_0 = x. Since I is an irreducible Markov process, for all t > 0 there exists p > 0, uniform over the initial mode, such that with probability at least p we have I_s = i_0 for all s ∈ [t, t + t_0]. Conditioning on this event and using the Markov property at time t, we get, for the switched process started from any (x, i),

Law(X_{t+t_0}, I_{t+t_0}) ≥ cp · l ⊗ δ_{i_0},

which is a Doeblin condition with reference measure l ⊗ δ_{i_0}, and this concludes the proof.

Corollary 6. The ISP is well-defined, admits a unique stationary measure, and its law converges exponentially fast, as t goes to infinity, towards this stationary measure in the total variation distance.

Numerical experiments
The main goal of the present work is to gain some empirical insight into the qualitative behavior of the processes introduced in Section 1.3 on simple models. Some questions we have in particular are the following: what happens if the direction of the gentlest ascent starting from a local minimum does not correspond to the direction the process eventually has to take to find a saddle point? How does the process behave in the presence of attracting singularities of the ISD flow? How does the efficiency of the process (in terms of exploration) depend on the switching rate? How does it compare to a basic overdamped Langevin process?
We will consider three models. The first one is a mixture of Gaussians, for which we can easily choose where the minima and the saddle point are, and what the gentlest ascent directions at the minima are. The second one is the function studied in [24], for which the unique saddle point is separated from the two minima by singular lines. Finally, the third example, in higher dimension and closer to a genuine application, is the Lennard-Jones cluster model with 7 particles.
Contrary to the theoretical analysis, most processes here live on R^d, which may raise a stability issue. This can be seen in dimension 1, where the process switches between a gradient descent and a gradient ascent, and thus can be non-recurrent, or even explosive, e.g. for potentials of the form |x|⁴ at infinity. This can be solved by a suitable modification of the dynamics outside some compact set, for instance replacing H_1 in (4) by H̃_1(x) = H_1(x)1_{|x|≤R} + H_0(x)1_{|x|>R}, or a smooth interpolation, or adding a deterministic jump from I_t = 1 to I_t = 0 when |X_t| ≥ R. Then, classical Lyapunov conditions on U imply that the process remains stable. However, this will not be necessary in our simple numerical experiments.
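The indicator-based modification is straightforward to implement; a minimal sketch (the double-well potential and the radius R = 5 are arbitrary illustrative choices):

```python
import numpy as np

def stabilized_drift(x, grad, hess, R=5.0):
    """Saddle-search drift H_1 replaced by the descent drift H_0 outside
    the ball of radius R, so that the process on R^d cannot follow a
    gradient ascent direction all the way to infinity."""
    g = grad(x)
    if np.linalg.norm(x) > R:
        return -g                      # H_0: plain gradient descent
    _, V = np.linalg.eigh(hess(x))
    v1 = V[:, 0]
    return -(g - 2 * v1 * (v1 @ g))    # H_1: reflected drift

# Hypothetical double-well U(x, y) = (x^2 - 1)^2 + 2 y^2.
grad = lambda z: np.array([4 * z[0] * (z[0] ** 2 - 1), 4 * z[1]])
hess = lambda z: np.array([[12 * z[0] ** 2 - 4, 0.0], [0.0, 4.0]])
```

Inside the ball the drift is the usual reflected one; outside, the drift points back towards the compact set, which is exactly what a Lyapunov condition on U requires.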

First 2D model: a mixture of Gaussians
We consider here a mixture of Gaussians in dimension 2. The potential is of the form

U_1(x, y) = −ln( ½ e^{−(x²+y²)} + ½ e^{−(s_x(x−m_x)²+s_y(y−m_y)²)} ),

with some parameters m_x, m_y ∈ R and s_x, s_y > 0. This is a classical model in statistics for multimodal problems. Here we take (m_x, m_y) = (4, 0) and consider two cases, either (s_x, s_y) = (3, 1) or (s_x, s_y) = (1, 3), see Figures 1 and 2. In those figures, we represent a typical trajectory of the noisy ISD and of the ISP, with ε = 0.03, final time T = 10, and ν = 0.2 in the case of the ISP, as well as the level lines of the potential. In these two cases, there are two local minimizers (x̄_1, 0) and (x̄_2, 0) separated by a unique saddle point (z̄, 0), where x̄_1 < z̄ < x̄_2, but the two cases differ by the initial direction taken by the saddle search in the wells. The yellow parts represent the trajectories of the noisy ISD, or the parts of the ISP where I_t = 1, and the blue parts are those of the ISP where I_t = 0 (corresponding to a gradient descent).
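This potential and its gradient are cheap to evaluate in closed form; a sketch (the formula for U_1 follows the mixture written above, with the default parameters of the first case):

```python
import numpy as np

def U1(z, mx=4.0, my=0.0, sx=3.0, sy=1.0):
    """Gaussian-mixture potential: minus the log of a sum of two
    (unnormalized) Gaussian bumps, one at the origin, one at (mx, my)."""
    x, y = z
    g0 = 0.5 * np.exp(-(x**2 + y**2))
    g1 = 0.5 * np.exp(-(sx * (x - mx)**2 + sy * (y - my)**2))
    return -np.log(g0 + g1)

def grad_U1(z, mx=4.0, my=0.0, sx=3.0, sy=1.0):
    """Analytic gradient of U1: a softmax-weighted average of the two
    quadratic gradients."""
    x, y = z
    g0 = 0.5 * np.exp(-(x**2 + y**2))
    g1 = 0.5 * np.exp(-(sx * (x - mx)**2 + sy * (y - my)**2))
    w0, w1 = g0 / (g0 + g1), g1 / (g0 + g1)
    gx = w0 * 2 * x + w1 * 2 * sx * (x - mx)
    gy = w0 * 2 * y + w1 * 2 * sy * (y - my)
    return np.array([gx, gy])
```

Near each bump one weight dominates and the potential is locally quadratic, which makes the minima, the saddle and the gentlest ascent directions easy to read off from (s_x, s_y).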
According to the large deviations principle, for small ε, if the initial condition corresponds to the right local minimum (x̄_2, 0), the overdamped Langevin process (1) will typically stay in the vicinity of this minimizer for a time exponentially long in 1/ε, and will likely go from there to the other minimizer by following in reverse the trajectory of a gradient descent started from a point arbitrarily close to the saddle point (which in the present case is a straight line from the minimum to the saddle point), see [13, Chapter 4, Theorem 2.1] as well as their computation of the quasi-potential.
In the case (s_x, s_y) = (3, 1), with an initial condition (x_0, 0), z̄ < x_0 < x̄_2, the noisy (and deterministic) ISD follows the same reactive trajectory as the overdamped Langevin process, the gentlest way to go to the saddle point being here along the gradient, but the transition happens much earlier (since this is the behaviour of the deterministic system at ε = 0 and not a large deviation from it). However, in the case (s_x, s_y) = (1, 3), the gentlest way to leave the right minimum is to start along the directions (0, 1) and (0, −1), see Figure 2. Nevertheless, independently from this difference at the beginning of the trajectory, in both cases the saddle point is found by the deterministic dynamics for initial conditions between x̄_1 and x̄_2.
For initial conditions (x_0, y_0) with x_0 > x̄_2, the ISD goes to infinity. Hence, the dynamics has to be modified outside some compact set, as explained in the introduction of this section, so that the process does not escape to infinity. In that case, one only needs to wait for the process to enter the domain of attraction of the saddle point for the deterministic ISD.
However, as an alternative way to enforce stability in order to get quantitative results, in the present case with a simple illustrative purpose, we simply use a periodized version of the potential. Denote

Ũ_1(x, y) = −ln( ½ e^{−L²(sin²(x/L)+sin²(y/L))} + ½ e^{−L²(s_x sin²((x−m_x)/L)+s_y sin²((y−m_y)/L))} ),

where L > 0 is a parameter, see Figure 3. This periodized potential has two supplementary saddle points, but for all (x, y) ∈ R², lim_{L→∞} Ũ_1(x, y) = U_1(x, y), and we may define the ISP on (πLT)². Using a 2D histogram, ν = 0.1 and ε = 0.05, we get a representation of the invariant measure in Figure 4. We see that the invariant measure charges both minima, as well as the saddle point between the two. Now, if the goal is to find both minima, then one can find one of them using a gradient descent, and the second one using the ISP. Since the ISD allows to find the saddle point, if the switching rate is low enough the switched process will find it as well and, thanks to the Brownian noise, after a first switch it will have a positive probability to be in the domain of attraction, for the gradient descent, of the second minimum, and thus will find it. If it was not in the right domain of attraction, then the process will go back to the first well and repeat the same kind of trajectory. The success of the algorithm is then the same as that of repeated biased coin flips. For small ε, as long as the transitions from one well to another are driven by this switching behaviour, the transition time behaves much better than the exponentially long time of the overdamped Langevin dynamics (with Brownian-driven rare transitions). At a fixed ε, when the switching rate becomes too small, the process has to wait for switching events to cross the saddle (and eventually, when it becomes very small, the process behaves like the overdamped Langevin dynamics and the transitions are driven by the small Brownian noise); on the other hand, a large switching rate induces an averaging
phenomenon (the drift along the gentlest ascent direction averages out to zero), which impairs the efficiency. Thus, there should be an optimal switching rate in terms of mean transition time. This is indeed what we observe, see Figure 5. The numerical experiments were conducted with ε = 0.03, (x_0, y_0) = (4, 0), I_0 = 0, and the time was estimated by Monte Carlo with n = 15 repetitions.

A 2D model with singular lines
Now we are interested in the potential

U_2(x, y) = (x² − 1)² + 2y²,   (x, y) ∈ R²,

see Figure 6. The deterministic ISD and GAD have been studied for this potential in [24]. This potential has two minima, (−1, 0) and (1, 0), and one saddle point, (0, 0). However, there are two lines, x = ±√(4/6), on which the Hessian of U has two equal eigenvalues. The deterministic GAD and ISD cannot cross those lines, and go to infinity, see Figure 7. In the noisy case, the Brownian motion may allow the process to cross these lines, see Figure 8.
As seen in Figure 8, the Brownian motion allows the process to cross the set of singularities, but for small ε this crossing may happen far from the set of minima and saddle points, because the deterministic process satisfies lim_{t→∞} y_t = +∞ for x_0 > √(4/6) and y_0 > 0, where (x_t, y_t) is the ISD. The singular lines are repulsive, and for x > √(4/6) the process is attracted to the line {x = 1}. Concretely, for the process to cross the singular line, the noise must be large enough for this to occur before the simulation stops due to numerical limits. Since the existence of this singularity results from a crossing between the eigenvalues of the Hessian of the potential U_2, we study the solution to the issue of singularities proposed in Section 1.3, to see in particular whether it allows a faster convergence towards the saddle point in our example. Recall the set of singularities

S = {z ∈ R² : λ_1(z) = λ_2(z)},

and the modified noisy ISD (writing z = (x, y) and Z_t for the process):

dZ_t = −(I_2 − a(Z_t)v_1(Z_t)v_1(Z_t)ᵀ − (2 − a(Z_t))v_2(Z_t)v_2(Z_t)ᵀ)∇U_2(Z_t) dt + √(2ε) dB_t,

with a(z) = f(λ_2(z) − λ_1(z)) for some function f : R_+ → [1, 2] such that f(0) = 1 and f(r) = 2 for r ≥ r*, for some small r* > 0. As explained, the idea is to replace the reflection with respect to v_1(z)⊥ by a projection on vect(v_1(z), v_2(z))⊥.
In dimension 2, the process simply becomes a Brownian motion near S. For the numerical study we consider cut-off functions f_1 and f_2 depending on a fixed parameter r_* > 0. An example of trajectory with f_1 is given in Figure 9. We compare the speed at which the process reaches the saddle point for the different cut-off functions f_0 = 2, f_1, and f_2. We chose the parameters r_* = 2, x_0 = (0.9, 0), and n = 500 trials. The average of τ = inf{t ≥ 0 : |X_t| < 0.1} over the n trials is displayed in Figure 10 for ε ∈ {0.03, 0.04, 0.05, 0.06}. We can see that the regularised dynamics reach the saddle point faster than the initial dynamics with f_0. Moreover, as explained before, for smaller values of ε there is a positive probability that the process does not reach the saddle point before hitting numerical limits. For a given temperature ε, this probability is lower for the processes defined from f_1 or f_2. This possible failure was not observed in the experiments displayed in Figure 10, but for ε = 0.02 we estimated the probability of failure with 500 trials for each process: we obtained 0.394 for f_0 and 0.046 for f_2, while no failure was observed for f_1.
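To make the role of the cut-off concrete, one can interpolate between the reflection (f = 2) and the projection on vect(v_1, v_2)^⊥ (f = 1, which gives zero drift in dimension 2). The interpolated drift and the linear cut-off below are illustrative guesses consistent with the constraints f(0) = 1 and f(r) = 2 for r ≥ r_*; they are not the paper's exact f_1 or f_2:

```python
import numpy as np

def regularised_drift(grad, hess, x, f):
    """Drift -(I - f(r) v1 v1^T - (2 - f(r)) v2 v2^T) grad U(x), where
    r = lambda_2 - lambda_1 is the spectral gap of the Hessian (r = 0 on
    the singular set S). For f(r) = 2 this is the ISD reflection; for
    f(r) = 1 it is the projection on span(v1, v2)^perp, i.e. zero drift
    in dimension 2."""
    w, V = np.linalg.eigh(hess(x))
    v1, v2 = V[:, 0], V[:, 1]
    r = w[1] - w[0]
    g = grad(x)
    fr = f(r)
    return -(g - fr * np.dot(v1, g) * v1 - (2.0 - fr) * np.dot(v2, g) * v2)

r_star = 2.0
f_lin = lambda r: 2.0 if r >= r_star else 1.0 + r / r_star  # illustrative cut-off

# Same hypothetical stand-in potential as before (not the paper's U_2).
grad = lambda z: np.array([z[0] * (z[0]**2 - 1.0), 2.0 * z[1]])
hess = lambda z: np.array([[3.0 * z[0]**2 - 1.0, 0.0], [0.0, 2.0]])
```

Near S the gap r vanishes, f_lin(r) tends to 1 and the drift is switched off smoothly, so the process diffuses across the singular line instead of being pushed away by the reflected gradient.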
Here we have not used any of the modifications discussed at the beginning of Section 3 to enforce stability, in order to isolate the effect of the singular line on the time needed to find the saddle point.

Lennard-Jones clusters
We now study the process numerically for a potential in higher dimension. Consider the Lennard-Jones potential for N = 7 particles in dimension 2, given by a sum of pair interactions V(|x_i − x_j|) over 1 ≤ i < j ≤ N, where x_i ∈ R^2 for all 1 ≤ i ≤ N and V(r) is defined for r > 0. The potential U_3 is invariant by rotation and translation of the full system and by permutation of the particles. Once these symmetries are ruled out, it has three non-global local minima and one global minimum, represented in Figure 11. We use as initial condition the local minimum such that U ≈ −11.47, and we compute the time necessary for the process to visit all minima for different values of ν and ε, estimated over 20 experiments. The average time is given in Figure 12. These results confirm the interpretation of Figure 5 discussed in Section 3.1. We see that the sensitivity of the exploration time to the switching rate increases as the temperature ε decreases, as should be expected: in the high-frequency regime, the drift along the gentlest ascent direction averages to zero so that, in that direction, the process moves at the speed of a Brownian motion with variance ε; in the low-frequency regime, the transitions between different wells are driven by the Brownian noise and thus follow an Eyring-Kramers formula, so that the mean transition time is exponentially large in ε^{-1}.
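For concreteness, a short sketch of the cluster energy, using the common normalisation V(r) = r^{-12} − 2r^{-6} (minimum V(1) = −1); the paper's exact normalisation is not reproduced here:

```python
import numpy as np
from itertools import combinations

def V(r):
    # Pair potential normalised so that min V = V(1) = -1
    # (one common convention; possibly not the paper's).
    return r**-12 - 2.0 * r**-6

def U3(x):
    """Lennard-Jones energy of a 2D cluster x of shape (N, 2)."""
    return sum(V(np.linalg.norm(x[i] - x[j]))
               for i, j in combinations(range(len(x)), 2))

# Centred hexagon: candidate shape for the global minimum of the
# 7-particle cluster in dimension 2.
angles = np.arange(6) * np.pi / 3.0
hexagon = np.vstack([[0.0, 0.0], np.c_[np.cos(angles), np.sin(angles)]])
```

With this normalisation U3(hexagon) ≈ −12.5 (12 pairs at distance 1, 6 at √3 and 3 at 2), and the value is unchanged under global rotations, translations and particle permutations.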
Since U_3 is invariant by global translations and rotations of the system, 0 is always an eigenvalue of its Hessian, associated with eigenvectors which are orthogonal to ∇U_3. As a consequence, at points x where ∇²U_3(x) has no negative eigenvalue, the ISD dynamics coincides with the gradient descent dynamics. For a real application, these known symmetries should be taken into account in order to avoid this. However, as we saw, even without addressing this issue, the switched process still finds all minima faster than the Langevin process: even though the gentle ascent behaviour plays no role in the vicinity of minimizers, it still makes saddle points locally attractive for the dynamics, which is a key point for the efficiency of the exploration.
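The zero eigenvalue coming from translation invariance can be checked numerically: for any configuration, the Hessian of U_3 annihilates the two global translation directions (at critical points, the rotation generator gives a third zero mode). A finite-difference check, with the common pair normalisation V(r) = r^{-12} − 2r^{-6} (possibly differing from the paper's):

```python
import numpy as np
from itertools import combinations

def U3(x):
    """2D Lennard-Jones cluster energy, pair potential r^-12 - 2 r^-6."""
    return sum(np.linalg.norm(x[i] - x[j])**-12
               - 2.0 * np.linalg.norm(x[i] - x[j])**-6
               for i, j in combinations(range(len(x)), 2))

def num_hessian(F, z, h=1e-4):
    """Central-difference Hessian of a scalar function F at z."""
    n = len(z)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            zpp = z.copy(); zpp[i] += h; zpp[j] += h
            zpm = z.copy(); zpm[i] += h; zpm[j] -= h
            zmp = z.copy(); zmp[i] -= h; zmp[j] += h
            zmm = z.copy(); zmm[i] -= h; zmm[j] -= h
            H[i, j] = (F(zpp) - F(zpm) - F(zmp) + F(zmm)) / (4.0 * h * h)
    return H

# Centred hexagon configuration of the 7-particle cluster.
angles = np.arange(6) * np.pi / 3.0
x = np.vstack([[0.0, 0.0], np.c_[np.cos(angles), np.sin(angles)]])
F = lambda z: U3(z.reshape(7, 2))
H = num_hessian(F, x.ravel())
t_x = np.tile([1.0, 0.0], 7)   # global translation along the first axis
# H @ t_x vanishes up to finite-difference error: 0 is an eigenvalue of H.
```

The same holds for the translation along the second axis, with t_y = np.tile([0.0, 1.0], 7).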
where m̄ = u/γ. By the hypothesis on H, the second term is 0. The function v depends on γ by definition, but it is ‖∇U‖_∞-Lipschitz for all γ > 0, hence the first and third terms go to 0 as γ → 0, uniformly over 0 ≤ u ≤ T. The map v ↦ (I − vv^T)Hv is C¹ on a compact set, hence Lipschitz. Thus we have:
‖v(u) − w(u)‖ ≤ L ∫_0^u sup_{r<s} ‖v(r) − w(r)‖ ds + ‖∫_0^u (I − v_s v_s^T)∇²U(x) v_s ds − ∫_0^u (I − v̄_s v̄_s^T) H v̄_s ds‖.

(a) Evolution of U along the noisy ISD. (b) Trajectory of the noisy ISD. (c) Evolution of U along the switched process. (d) Trajectory of the switched process.

Figure 5: Time to reach one minimum from the other on the periodized potential.

Figure 6: Level lines of the potential U_2.
(a) Evolution of U along the deterministic ISD. (b) Trajectory of the deterministic ISD.

(a) Evolution of U along the noisy ISD. (b) Trajectory of the noisy ISD.

Figure 10: Time to reach the saddle point.

Figure 11: List of the minimizers of U_3.

Figure 12: Time to visit all minima of the Lennard-Jones cluster.