Declining neighborhood simulated annealing algorithm for robust fixture synthesis in a point-set domain

Fixture synthesis addresses the problem of fixture element placement on the workpiece surfaces. This article presents a novel variant of the simulated annealing (SA) algorithm, called declining neighborhood simulated annealing (DNSA), developed specifically for the fixture synthesis problem. The objective is to minimize measurement errors in the machined features induced by misalignment at the locator-workpiece contact points. The algorithm systematically evaluates different fixture layouts to reach a sufficient approximation of the most robust layout. In each iteration, a set of previously accepted candidates is exploited to direct the search toward the optimal region. As the algorithm progresses, the search space is reduced and new candidates are selected according to a declining probability density function (PDF). To ensure the best performance, the DNSA parameters are configured using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). Moreover, the parameters auto-adapt to the complexity of a given input based on a Shannon entropy index. The optimization process is carried out automatically in the computer-aided design (CAD) environment NX; a computer code was developed for this purpose using the application programming interface (API) NXOpen. Benchmark examples from an industrial partner and the literature demonstrate satisfactory results.


Introduction
A fixture is the device used to immobilize and localize the workpiece during machining. Fixtures comprise many components, such as locator pins, locator pads, supports, and clamps. Locators are critical components used to maintain the proper position of the workpiece. Errors at the locator-workpiece contact points (known as localization source errors) contribute to the formation of workpiece positional errors. Positional errors are measurement errors in the machined features (features with strict tolerance requirements are of main concern). To reduce the impact of these errors, locators should be robustly positioned on the workpiece surfaces. In this work, the exterior surfaces of the workpiece are treated as a vast collection of discrete points.
These points represent all available positions for locator placement. Fixture synthesis then deals with the allocation of a given number of locators among these available coordinates.
The problem of fixture synthesis is of combinatorial complexity [1]. Thus, a simple exhaustive test of all possible candidates is impractical even for a small number of locators. Fixture synthesis problems have been tackled in the literature using traditional methods such as linear and nonlinear programming [1][2][3]. However, these methods do not guarantee a global or near-global solution. Additionally, most of them are highly dependent on the selection of the initial solution. On the other hand, non-traditional methods such as evolutionary algorithms are capable of overcoming these drawbacks. Studies in the literature on the fixture synthesis problem have focused specifically on the genetic algorithm (GA) and the ant colony algorithm (ACA) [4][5][6][7][8][9][10].
This paper proposes a new automated approach for robust fixture synthesis in a point-set domain carried out in the computer-aided design (CAD) environment [11]. The optimization is performed by a new algorithm called declining neighborhood simulated annealing (DNSA). The name declining neighborhood reflects the mechanism by which the next candidates are generated, which will be discussed in detail. The framework of this study is illustrated in Fig. 1. The fixture model is based on kinematic analysis and the mechanics of rigid bodies, as in previous studies [12,13], but with a different objective function (cost function). The inputs to the DNSA are the point-set domain and the weighted reference points. The point-set domain is simply a discrete set of points (extracted in the CAD software NX) covering all coordinates on the workpiece surfaces that are feasible for locator placement. Weighted reference points are essential for the evaluation process, which is discussed in Sections 3 and 4. The algorithm output is expected to be equal or sufficiently close to the global optimum.
Section 2 discusses some related works. The fixture model and the objective function are presented in Section 3. Section 4 explains the discretization approach. Section 5 outlines the proposed DNSA algorithm. Setting the algorithm parameters is explained in Section 6. The result and discussion of benchmark examples are presented in Section 7. Finally, Section 8 concludes this paper and suggests possible future research direction.

Related works
The automatic configuration of machining fixtures includes two kinds of problems: fixture analysis and fixture synthesis [14]. Fixture analysis emphasizes the performance evaluation of a given fixture schema under a set of fixturing requirements, such as deterministic locating and total fixturing [15]. Fixture analysis theories are built on classical screw theory [14] or geometric perturbation techniques [16]. Fixture synthesis, on the other hand, is concerned with the design of a fixture layout that fulfils given requirements. Brost and Goldberg [2] presented one of the early works on synthesizing the arrangement of three locators and one clamp for polygonal parts. Phoomboplab and Ceglarek [17] proposed a design synthesis framework based on a new hybrid design structure matrix and task flow chain for dimensional management in multistage assembly; the design tasks are tolerance optimization, multi-fixture layout design, and part-to-part joint design. Zheng and Qian [18] investigated the use of 3-D modular fixtures for complex objects and developed an algorithm to determine the optimal fixture element locations based on localization accuracy and immobilization capability criteria. On the whole, a review paper reported that the number of studies in the scope of fixture synthesis is small compared with fixture analysis [10].
In the context of the fixture model, three workpiece models are commonly used for fixture layout optimization: the rigid body model, the fixture contact point elastic model, and the workpiece elastic model. Yang et al. [19] developed a rigid body model to compensate for datum errors during machining with general fixture layouts. Ding et al. [20] proposed an algorithm based on a ray-shooting technique to search for optimum form-closure grasps of hard fingers on the surface of a 3-D object. Satyanarayana and Melkote [21] employed workpiece elastic deformation and reaction forces to predict machining surface error and the stability of the fixture-workpiece system. Raghu and Melkote [22] demonstrated the effect of the clamping sequence on workpiece location error using a workpiece-fixture elastic contact point model. Krishnakumar and Melkote [5] and Kaya [4] optimized the fixture layout using a simple GA and a workpiece elastic model; the objective is to minimize the workpiece elastic deformation under static loading conditions. Traditional methods for fixture layout optimization have been reported in many studies. Wang and Pelinescu [1] presented an iterative optimization method based on sequential optimal deletion in the point-set domain for fixture synthesis; the objective is to meet two performance criteria: the norm and dispersion of the locator contact forces and the workpiece localization accuracy. Brost and Goldberg [2] synthesized modular fixtures using an iterative algorithm that grows with an order of O(n^5), where n represents the number of edges multiplied by the maximum diameter. Zheng and Chew [3] proposed a graphical approach based on the simplex distance to hold an object in form closure while enhancing immobilization capability. The aforementioned fixture design methods have several drawbacks: for instance, they may lead to a local minimum solution, and some of them apply an exhaustive iteration to find the best solution, which is not suitable for a vast range of candidates.
Heuristic methods have found many applications in fixture synthesis and analysis problems. Vallapuzha et al. [9] compared the performance of four fixture layout optimization algorithms: discrete GA, continuous GA, discrete SQP, and continuous SQP. Prabhaharan et al. [23] applied the ACA and GA algorithms separately to the fixture optimization problem; their model used finite element analysis to minimize dimensional and form errors, and the results report faster and more accurate solutions for ACA. Padmanaban et al. [7] used ACA to optimize the fixture layout design with the objective of reducing elastic deformation; the algorithm was tested in both continuous and discrete domains, with results proving the superiority of the former.
As far as we know, this work is the first to implement a new variant of simulated annealing (SA) for the discussed problem. SA holds many advantages that make it suitable for fixture synthesis. For instance, the initial set of candidates has no significant impact on the final solution. Additionally, it has been theoretically proven that SA approaches the global optimum with probability one as long as the annealing schedule is sufficiently extended [24]. The proposed algorithm has been incorporated in the CAD software NX as a subroutine, which contributes to CAD-based methods for automated fixture design.

Fixture model
Before constructing the optimization method, the objective function is formulated based on a rigid body fixture model. The fixture model is used to map the relationship between the localization source error and the workpiece positional error. The localization error is assumed to result from three sources of variation: variation due to locator setup errors, variation due to manufacturing defects of the locators, and variation due to manufacturing defects of the workpiece [13]. In this work, locators are assumed to be sufficiently represented as points on a surface. For simplicity, the minimum number of locators is used to constrain a 2-D workpiece in a deterministic locating schema, i.e., three locators are used [12].
The source variation vector for the three locators is denoted as $R = [x_1 \; y_1 \; x_2 \; y_2 \; x_3 \; y_3]^T$, where $x_i$ and $y_i$ refer to the $i$-th locating point. Based on [13,15], the matrix B that maps the source variation to the workpiece positional error is found to depend on the unit normal vectors and the spatial coordinates at the locator-workpiece contact points. The notations $n_{ix}$, $n_{iy}$, $i = 1, 2, 3$, are the unit normal vector components at the locating point $(x_i, y_i)$. For a 2-D workpiece, if the rank of the Jacobian matrix is less than three, the matrix is not invertible, which means the workpiece is not deterministically located [12].
To ensure the robustness of the fixture layout, the locators are arranged in a way that minimizes the variation in the positional errors. In this study, a typical localization source error is assumed to follow an independent normal distribution; the variation of the positional error then follows as given by Eq. 5. Equation 5 quantifies the variance of both the translational and rotational errors at a specified reference point. The focus in this context is on the translational error rather than the rotational error; hence, the matrix elements $a_{3,j}$, which reflect the rotational error, are dropped. In practice, we can either account for the resultant translational error over the whole part or focus exclusively on critical feature(s) of the workpiece. Thus, instead of computing the variance at one reference point, a set of reference points that fully depicts these features is used, and the objective function is set to be the summation of the matrix elements $a_{i,j}$ over all reference points. Reference points, also known as key product characteristic (KPC) points in related studies, are automatically generated and individually assigned a specific weight $w_{ref}$. The generating mechanism and the calculation of these weights are described in Section 4. In brief, the weights indicate how close a reference point is to the feature of interest. The weighted objective function is then given by Eq. 7. So far, the kinematic analysis used to derive objective function 7 takes the mobility of bodies in contact as a function of the surface normals only, which disregards the local motion at the locator-workpiece contact points. This treatment is sufficient for workpieces with planar surfaces. However, positioning a locator at a curved region of the workpiece introduces more error; hence, employing the second-order mobility index (curvature) becomes essential [25]. Hereafter, we adopt a correction value for curvature as proposed by Cai et al. [15]. The correction value is defined by Eq. 8, where $\kappa_i$, $i = 1, 2, 3$, refers to the Gaussian curvature at the locating points and R is the mean dimension of the workpiece. These corrections are applied to the objective function, specifically to the source variation vector. The correction $C_i$ at a locating point can be interpreted as an error amplification coefficient: when a locator is placed on a flat surface the curvature is zero and $C_i = 1$, while for a curved surface the correction value is greater than one, which results in a higher positional error. The new objective function with the embedded curvature correction is given by Eq. 9.

Discretization
The fixture synthesis problem deals with the placement of locators on the workpiece surfaces. For a 2-D workpiece geometry, the positioning is on the boundary edges of the face entity of the workpiece CAD model. The boundary edges are converted to a set of discrete points, which are indexed and maintained in a string referred to as the point-set domain. Discretization is achieved by first converting the face entity into a set of uniformly spaced points and then isolating the boundary path, as shown in Fig. 2. The uniform mesh parameter is the constant increment that defines the spacing of this grid. This method of uniform mesh discretization is simple to perform and allows a workpiece of arbitrary shape to be discretized with minimal complexity [26]. The discretization method is built on the following assumptions: 1) the workpiece boundary is smooth; 2) the distance between nodes is small enough that it does not significantly affect the accuracy of the solution.
The calculation of the mathematical attributes (first and second derivatives, unit normal vectors, and curvature) for all points in the point-set domain can be explained by considering one instance point. Figure 3 shows an instance point in the point-set domain with its inline adjacent points. Two hypothetical adjacent points form the basis for approximating the attributes. These two points are located on the forward and backward fitting lines, which are determined by interpolating the adjacent points on both sides of the instance point (four points on each side in this study). The distance between each hypothetical point and the instance point is the mean distance between all pairs of adjacent points in the boundary. The first and second derivatives at an instance point are approximated from these two hypothetical points, and the unit normal vector is then determined as shown in Fig. 3.
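To make the attribute computation concrete, the following minimal Python sketch approximates the derivatives, unit normals, and curvature for an ordered, closed boundary point set. It replaces the paper's forward/backward line fitting over four neighbors with plain central differences, so the function name, the differencing scheme, and the periodic treatment of the boundary are illustrative assumptions rather than the implemented NXOpen routine.

```python
import numpy as np

def boundary_attributes(pts):
    """Approximate first/second derivatives, unit normals, and curvature for a
    closed boundary given as an ordered (n, 2) array of points.
    Illustrative sketch: central differences over the mean point spacing are
    used instead of the paper's forward/backward line-fitting procedure."""
    pts = np.asarray(pts, dtype=float)
    # mean distance between adjacent boundary points (the node spacing)
    step = np.mean(np.linalg.norm(np.diff(pts, axis=0, append=pts[:1]), axis=1))
    # first and second derivatives along the boundary (periodic wrap-around)
    d1 = (np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)) / (2.0 * step)
    d2 = (np.roll(pts, -1, axis=0) - 2.0 * pts + np.roll(pts, 1, axis=0)) / step**2
    # unit tangent, and unit normal obtained by rotating the tangent by 90 degrees
    tangent = d1 / np.linalg.norm(d1, axis=1, keepdims=True)
    normal = np.column_stack((-tangent[:, 1], tangent[:, 0]))
    # standard signed curvature of a planar parametric curve
    curvature = (d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) / \
                (d1[:, 0]**2 + d1[:, 1]**2) ** 1.5
    return d1, d2, normal, curvature
```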
The attributes obtained from the previous computation contain much noise. Relying on the assumption of a smooth boundary, we further apply smoothing by bin to all attribute sets. In smoothing by bin, each attribute value is substituted by the mean value of a bin. A bin is composed of the attribute values of the instance point, the previous three points, and the next three points. Figure 4 shows curvature values for an ellipse shape before and after smoothing by bin.
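The smoothing-by-bin step can be sketched as follows; the wrap-around handling at the ends of the point string is an assumption for closed boundaries, and the function name is illustrative.

```python
import numpy as np

def smooth_by_bin(values, half_width=3):
    """Replace each attribute value by the mean of a bin made of the value itself,
    the previous `half_width` values, and the next `half_width` values.
    The boundary is treated as closed (wrap-around at the ends)."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    smoothed = np.empty(n)
    for i in range(n):
        bin_idx = [(i + k) % n for k in range(-half_width, half_width + 1)]
        smoothed[i] = values[bin_idx].mean()
    return smoothed
```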
Generation of reference points
Reference points are representative points for a given feature and are used to estimate the resultant translational error in the neighboring region. For a 2-D workpiece, a feature is recognized and enclosed by a bounding rectangle as shown in Fig. 5. The selection of reference points for any feature is performed in two rounds; in each round, reference points on the feature boundary are chosen according to the following rules. Round one: the four points on the feature boundary having the minimum distance from the four corners of the bounding rectangle are selected. Round two: if any edge of the bounding rectangle does not contain at least one point selected in the previous round, then pick one point at the intersection of that edge and the feature boundary.
If round one yields at least one point located at each edge of the bounding rectangle, then the second round produces no extra points, as for a rectangular shape. However, if no point of the first round lies on any edge of the bounding rectangle, then the second round yields four points, as for an ellipse. Other cases lie between these two extremes.
Reference points weighting mechanism
The weight of any reference point is an indicator of its significance as a delegate for the feature(s). To estimate the values of these weights, the feature boundary edges are discretized in the same manner as before (Fig. 2). Then, the contribution of each non-reference point is measured based on the closeness of this point to the reference point. The weight of a reference point is the summation of all point contributions, given by Eq. 10, where d_ij denotes the distance between a reference point i = 1, 2, …, n and a point j = 1, 2, …, m on the boundary edge of the target feature.
In practice, the workpiece can have many features with strict accuracy requirements. These features must be the basis for computing the cost function. Accordingly, reference points are selected for all features separately. Then, for convenience, all weight values are normalized by dividing each weight by the sum of all weights.
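One plausible reading of the weighting mechanism is sketched below: each boundary point of the target feature contributes to a reference point in proportion to the reciprocal of their distance d_ij, and the resulting weights are normalized to sum to one. The reciprocal-distance contribution is an assumption consistent with the description above, not a verbatim reproduction of Eq. 10.

```python
import numpy as np

def reference_point_weights(ref_points, feature_boundary_points):
    """Weight each reference point by summing contributions of all feature
    boundary points; a contribution is assumed here to be 1/d_ij, so closer
    boundary points contribute more. Weights are then normalized to sum to 1."""
    ref = np.asarray(ref_points, dtype=float)                 # shape (n, 2)
    bnd = np.asarray(feature_boundary_points, dtype=float)    # shape (m, 2)
    # pairwise distances d_ij between reference point i and boundary point j
    d = np.linalg.norm(ref[:, None, :] - bnd[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)          # guard against coincident points
    weights = (1.0 / d).sum(axis=1)  # assumed form of Eq. 10: w_i = sum_j 1/d_ij
    return weights / weights.sum()   # normalization described in the text
```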

Optimization algorithm
Simulated annealing is a probabilistic technique first introduced independently by Kirkpatrick et al. [28] and Černý [29]. It belongs to the class of Monte Carlo methods [30]. SA is built on the simple greedy search algorithm known as hill climbing. In hill climbing, the algorithm starts from an arbitrary solution and then moves toward improved values in search of a better solution; it terminates when no further improvement is found. The main drawback of hill climbing is that it may get stuck in a local optimum. SA, on the other hand, employs a probabilistic technique to overcome this drawback.
A comparison of the two algorithms can be illustrated by the simple example of a rabbit hopping around in an attempt to find the lowest point in a valley. Hill-climbing algorithm: the rabbit successively jumps to a place lower than its current level until it reaches the lowest point not far away. This point, however, is not necessarily the lowest in the valley. Simulated annealing algorithm: the rabbit is drunk; it starts jumping randomly, then gradually becomes sober and jumps toward lower ground until it reaches the lowest point, or a point near the lowest point, in the valley.
The concept of SA is drawn from a well-known physical phenomenon called annealing. In thermodynamics, annealing refers to the process of slow cooling, in which an object gradually cools down from an initial state of high temperature so as to increase the size of its crystals and reduce defects. During the slow cooling, the microstructure of the material is gradually altered so that the molecules have enough time to settle. The process eventually reaches a crystallized form, which corresponds to the lowest energy state of the object. Conversely, if the object cools too fast (also known as "quenching"), the result is an amorphous form of the material, which is not the lowest energy state. In short, the lower the cooling rate, the lower the final energy state of the object.
The SA algorithm essentially applies the theory of thermodynamic annealing to statistics. Every point in the search space can be visualized as a molecule in the air with a specific free energy value. Search space points, like air molecules, also have an "energy" that indicates the suitability of the point for the problem. The algorithm starts from an arbitrary point in the space. In each step, a "neighbor" is selected first, and then the probability of moving to the "neighbor" from the current location is calculated. If the probability is greater than a given threshold, the algorithm jumps to the "neighbor"; if not, it stays in the original position.
In minimization problems such as fixture synthesis, a new solution that is accepted directly constitutes a downhill move. If the new solution does not satisfy the acceptance condition, it is subjected to a probability function that decides whether to accept the candidate anyway. Following this strategy, SA is efficient in escaping local minima. The probability of accepting a new candidate is given by Eq. 11, where ΔE denotes the change in the objective value. In the hill-climbing algorithm, the probability of accepting a new candidate when ΔE > 0 is zero; in SA, however, it is equal to the Metropolis criterion exp(−ΔE / (k_b T)) [30],
where T is the current temperature and k_b is the Boltzmann constant, which is recalculated in every iteration by setting the Metropolis criterion equal to one at T = T_max with ΔE equal to the minimum nonzero value obtained so far. A detailed examination of the acceptance probability equation reveals two behaviors. First, as the solid cools down, the probability of accepting a worse candidate becomes smaller. Second, if the change in energy (cost value) ΔE is large, the probability of acceptance is correspondingly low.
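As a concrete illustration of the acceptance rule of Eq. 11, the sketch below accepts every downhill candidate and accepts an uphill one with the Metropolis probability; the per-iteration recalibration of k_b described above is left to the caller, and the function name is illustrative.

```python
import math
import random

def accept(delta_e, temperature, k_b=1.0):
    """Metropolis-type acceptance rule (Eq. 11).
    delta_e: change in objective value (candidate - current); <= 0 is a downhill move.
    k_b is the Boltzmann-like constant, recalibrated each iteration as described above."""
    if delta_e <= 0:          # improving (downhill) candidate: always accepted
        return True
    # worsening candidate: accepted with the Metropolis probability
    return random.random() < math.exp(-delta_e / (k_b * temperature))
```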
So far, we have discussed the general principles of SA. In this study, we propose the DNSA algorithm, a new variant of the original algorithm, referred to here as standard simulated annealing (SSA). The new algorithm is developed specifically for the present problem of fixture synthesis. The proposed algorithm differs from SSA in two aspects. First, DNSA exploits a set of previously accepted candidates to predict the next move. Second, the generation of new candidates is governed by a declining probability density function (PDF). To put it differently, a group that keeps records of the last accepted candidates is employed to direct the search area; more elaboration is given in Section 5.3.

The proposed algorithm
The DNSA algorithm mainly contains two loops. In the outer loop, both the temperature and the PDF domain size parameters are updated at each iteration. The temperature starts from an initial maximum temperature T_max and decreases gradually as the algorithm progresses, according to a cooling schedule. The PDF domain size parameters control the range of possible inputs; the range begins with the full string size of the point-set domain and decreases gradually. The inner loop is the epoch loop, in which several iterations are repeated at the same temperature and PDF domain size. Evaluation and selection of candidates are carried out in this loop as follows. First, a new candidate is generated (see Section 5.3). Second, the objective value of the selected candidate is computed using Eq. 7 when the curvature effect is disregarded, or Eq. 9 when the curvature correction is considered. Then, the candidate is appended to the group of last accepted candidates according to the acceptance criterion of Eq. 11. The group capacity is set to a specified maximum number of candidates (set to 10 in this work; see Section 6.1). When the group reaches its maximum size, the oldest candidate is dropped to make space for the new one. If the group members are altered, the PDF mean values are recomputed based on the new values. Finally, the termination criteria are checked at the end of each iteration (see Section 5.4).
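The two-loop structure can be summarized by the following hedged Python sketch. Here a candidate is assumed to be a tuple of point-set indices (one per locator), and generate_candidate, objective, cooling, and accept are placeholders for the mechanisms of Sections 3 and 5.2-5.4, not the authors' NXOpen implementation.

```python
from collections import deque

def dnsa(point_set, t_max, t_min, n_iterations, n_epochs,
         generate_candidate, objective, cooling, accept, group_size=10):
    """Skeleton of the DNSA double loop (illustrative sketch, not the authors' code).
    Outer loop: update temperature and the PDF domain size each iteration.
    Inner (epoch) loop: generate, evaluate, and accept/reject candidates."""
    group = deque(maxlen=group_size)        # last accepted candidates (search memory)
    current = generate_candidate(point_set, group, 0)
    current_value = objective(current)
    best, best_value = current, current_value
    for i in range(n_iterations):
        temperature = cooling(i, t_max, t_min, n_iterations)      # cooling schedule (Eq. 12)
        for _ in range(n_epochs):
            candidate = generate_candidate(point_set, group, i)   # declining-PDF mechanism
            value = objective(candidate)                          # Eq. 7 or Eq. 9
            if accept(value - current_value, temperature):        # acceptance rule (Eq. 11)
                current, current_value = candidate, value
                group.append((candidate, value))   # oldest member dropped when full
                if value < best_value:
                    best, best_value = candidate, value
        # a counter of successive rejections would trigger early termination (Section 5.4)
    return best, best_value
```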

Cooling schedule
The cooling schedule plays a critical role in DNSA. A fast cooling rate may result in a local optimum. Conversely, a slow cooling rate causes the optimization process to last longer. The cooling rate depends on a set of parameters, specifically the initial temperature, the final temperature, and the number of iterations, denoted T_max, T_min, and d, respectively. Many cooling rate formulas have been investigated in the literature [31]. In this paper, three cooling rate formulas were tested together with the other parameters (see Section 6.1). The exponential multiplicative cooling rate formula given by Eq. 12 was selected.
The cooling rate coefficient, denoted k_c, is obtained by setting T_i = T_min at d = d_final, where d_final is the last iteration. The chosen formula permits a longer execution time in the lower-temperature stage (final stage) compared with the higher-temperature stage (early stage). This ensures an intensive search at the end of the execution to further refine the solution.
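Since Eq. 12 itself is not reproduced above, the sketch below assumes the common closed form of exponential multiplicative cooling, T_i = T_max * k_c^i, with k_c fixed by requiring T = T_min at the final iteration; it should be read as an illustration of the described behavior rather than the exact formula.

```python
def exponential_cooling(i, t_max, t_min, d_final):
    """Exponential multiplicative cooling (assumed form of Eq. 12): T_i = T_max * k_c**i.
    k_c follows from the condition T_i = T_min at i = d_final."""
    k_c = (t_min / t_max) ** (1.0 / d_final)
    return t_max * k_c ** i

# example: temperature after 150 of 300 iterations, cooling from 100 down to 0.1
# exponential_cooling(150, 100.0, 0.1, 300)  ->  about 3.16
```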

Generation mechanism
The generation of new candidates is controlled by a PDF for most of the execution time. A separate PDF is assigned to each locator. In essence, the PDF is applied over the point-set domain to assign a selection probability to each index in the string. A set of control parameters governs the PDF behavior. Among them is the mean parameter, which is used to direct the search area; its value is estimated from the last accepted candidates. The higher the number of candidates used to compute the mean, the more accurate the estimation of the best searching zone. At the early stage of execution, the group holds only a minimum number of candidates; therefore, the estimation of the searching zone is inaccurate due to the high uncertainty in the mean value. To solve this, a preliminary phase (first phase) precedes the main phase of the PDF generating mechanism (second phase), as in Fig. 6. The first phase generates new candidates according to a uniformly distributed probability. This phase continues until the number of stored accepted candidates (the group size) reaches the maximum limit. At an early stage of the second phase, the selection space covers a wide range of the point-set domain, and the search mechanism is characterized as diversification. Last-stage iterations, on the other hand, are characterized as intensification, and the focus is drawn to a narrower zone. During this narrowing of the search space, the solution gets closer to the global optimum. The PDF is designed to harmonize the transition between the two stages, as shown in Fig. 6. In this context, we propose a set of piecewise equations (Eq. 13) to mathematically depict the declining transition. This set of equations describes one half of the PDF profile; the other half is simply the inverse, as shown in Fig. 7. The value x_i here indexes the point to be selected as the new candidate. The parameters z, c, a, and l are characteristic constants that control the shape and size of the PDF. There exists an infinite number of alternatives for adjusting these parameters; in this study, we set the parameter values intuitively, without extra complexity, as given in Table 1. To generate a random point according to the probability defined by this PDF, we first integrate the piecewise equation to obtain the cumulative probability distribution (Eq. 14) or, in inverse form, Eq. 15. To obtain the value of x_i, the variable y is first generated randomly following a uniform distribution within the range $0 < y < zc + \frac{z}{l}\left(1 - e^{l(c-a)}\right)$; substituting y into Eq. 15 then yields a random x_i that follows the specified PDF.
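Because the closed-form inverse of Eq. 15 is not reproduced here, the sketch below draws a candidate index by the equivalent discrete inverse-transform procedure: evaluate a declining PDF over the point-set indices, accumulate it into a CDF, draw a uniform number, and invert numerically. The exponential-decay profile is a placeholder standing in for the piecewise function of Eq. 13.

```python
import numpy as np

def sample_index(domain_size, mean_idx, decay):
    """Draw one point-set index from a PDF that peaks at mean_idx and declines
    away from it (placeholder exp(-decay*|x - m|) profile standing in for Eq. 13),
    using discrete inverse-transform sampling instead of the closed form of Eq. 15."""
    x = np.arange(domain_size)
    pdf = np.exp(-decay * np.abs(x - mean_idx))   # declining profile on both sides of the mean
    cdf = np.cumsum(pdf / pdf.sum())              # cumulative distribution over the indices
    y = np.random.rand()                          # uniform draw in (0, 1)
    return int(np.searchsorted(cdf, y))           # invert the CDF numerically

# example: one candidate index for a 500-point domain, centered on index 120
# idx = sample_index(500, 120, decay=0.05)
```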
In general, the PDF is controlled using only the parameters m, a, and c, while the parameters z and l are kept constant (m refers to the mean value, whereas a and c are the domain size control parameters). The mean associated with the PDF of the loc-th locator is estimated using inverse distance weighted (IDW) interpolation (Eq. 16) applied to the group of last accepted candidates [32]. The weights w_ji are the reciprocals of the objective values of the group members.
Table 1 Settings of the PDF control parameters:
c — reduction rate k_c^2: the reduction rate of the upper portion of the PDF (see Fig. 7) is the square of the cooling rate.
a — reduction rate k_c: the reduction rate of the lower portion of the PDF (see Fig. 7) is identical to the cooling rate.
z — 100: roughly estimated.
l — computed so that the lower ends of the equation plot have a separation distance equal to 1 from the x axis.
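The mean update can be illustrated as follows, under the assumed reading of Eq. 16 that the stored indices of the last accepted candidates are averaged with weights equal to the reciprocals of their objective values; the data layout (a list of (candidate, value) pairs) matches the loop sketch above and is otherwise an assumption.

```python
def estimate_pdf_mean(group, locator):
    """Estimate the PDF mean for one locator from the group of last accepted
    candidates (assumed reading of Eq. 16): a weighted average of the stored
    point-set indices, with each weight equal to the reciprocal of the candidate's
    objective value, so better candidates pull the mean harder."""
    weights = [1.0 / value for candidate, value in group]
    indices = [candidate[locator] for candidate, value in group]
    return sum(w * x for w, x in zip(weights, indices)) / sum(weights)
```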

Terminating criteria
The algorithm has two termination conditions. The first condition is met when the number of iterations reaches the upper limit, i.e., when the temperature reaches its minimum value T_min. However, in most cases, the best solution is found about halfway to this limit. Therefore, a second termination condition is triggered when no improvement is observed over a row of iterations; a counter of successive rejected candidates triggers termination when it reaches a predetermined maximum value. Figure 9 shows empirical data on the number of iterations before termination for different point-set domain sizes. As the domain gets larger, termination occurs at a later stage of the execution. For a large domain size, the algorithm may not terminate according to the second condition, and improvements continue to appear, although with insignificant changes in the objective value; the algorithm is then forced to terminate when it reaches the maximum number of iterations (first termination condition). SA performance is highly dependent on the control parameters, such as the maximum temperature, the number of iterations, the number of epochs, and the cooling rate [33]. However, as long as an SA-based algorithm runs for a long time, the setting of these parameters may not be a critical issue [34]. Adopting parameters that lead to a long execution time may yield an accurate solution, though it is impractical to go through all possible combinations of the design variables. Hence, a trade-off has been made between accuracy and execution time in favor of accuracy. First, a systematic method is applied to set up the algorithm parameters for one specified input; then a generalization is made to adapt to any type of input.

Initial configuration of parameters
In this study, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is adopted to set up the DNSA parameters [35]. TOPSIS was first proposed by Hwang and Yoon [36] and later modified by Hwang et al. [37]. It is categorized under the multiple attribute decision-making (MADM) techniques.
Before applying the TOPSIS method, we perform a pre-evaluation phase to determine the range of values of the control parameters over which DNSA shows good performance. This is done by testing each parameter individually while maintaining an average estimate for the other parameters. The parameter alternatives are then arranged in all possible combinations and tabulated as in Table 2. The cooling rate indexes (1, 2, and 3) refer to the three tested formulas: exponential multiplicative cooling, logarithmic multiplicative cooling, and linear multiplicative cooling, respectively. The group size is the maximum number of last accepted candidates maintained to direct the search region. The performance of DNSA is evaluated using three criteria: 1. the quality of the final solution, i.e., how close the final solution is to the global minimum; 2. the algorithm execution time; 3. the stability, i.e., the consistency of the measurements of the previous two criteria.
For each alternative in Table 2, the algorithm was run 30 times to acquire measurements for the evaluation process. Specifically, four metrics were taken during these runs to quantify the three criteria: the average and standard deviation of the execution times and the average and standard deviation of the objective values, denoted $\bar{t}$, $\sigma_t$, $\bar{F}$, and $\sigma_F$, respectively. The two averages quantify the first two criteria, whereas the stability is quantified by the two standard deviations. Comparing different alternatives using standard deviations and averages is acceptable since all test runs are subjected to the same input. After estimating the four metrics, each metric's normalized score for every alternative is computed using vector normalization. The combination of alternatives and normalized performance indicators is known as the evaluation matrix. Each normalized performance indicator is then multiplied by a specific weight that indicates the significance of the criterion. The weights are estimated via the analytic hierarchy process (AHP) [38], in which the importance of each performance indicator is quantified on a scale from 1 to 9. The outcome of TOPSIS is a ranking of all alternatives based on their distance to the positive ideal solution (PIS) $V_j^+$, a hypothetical solution that attains the best score in each criterion. The negative ideal solution (NIS) $V_j^-$, on the other hand, is the hypothetical worst solution. The distance of each alternative to the PIS and NIS is computed by Eqs. 17 and 18. Finally, the alternatives are ranked according to their closeness $C_i$ using Eq. 19.
The evaluation process using TOPSIS yields an optimum configuration, shown in the row of Table 2 with rank value equal to 1. So far, all alternatives have been evaluated under the same input, yet the obtained configuration might not be preferable for other types of inputs [33].
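For reference, a compact TOPSIS sketch is given below: vector-normalize the decision matrix, apply the criterion weights, form the positive and negative ideal solutions, and rank the alternatives by relative closeness (Eqs. 17-19). The benefit/cost flags and the example weights are illustrative and must be adapted to the four metrics described above.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    decision_matrix: (alternatives x criteria) raw scores.
    weights: criterion weights (e.g., from AHP), summing to 1.
    benefit: boolean per criterion, True if larger is better, False for costs."""
    x = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit, dtype=bool)
    # vector normalization, then weighting (evaluation matrix)
    v = w * x / np.linalg.norm(x, axis=0)
    # positive ideal solution (PIS) and negative ideal solution (NIS)
    pis = np.where(benefit, v.max(axis=0), v.min(axis=0))
    nis = np.where(benefit, v.min(axis=0), v.max(axis=0))
    # distances to PIS and NIS (Eqs. 17 and 18) and relative closeness (Eq. 19)
    d_plus = np.linalg.norm(v - pis, axis=1)
    d_minus = np.linalg.norm(v - nis, axis=1)
    return d_minus / (d_plus + d_minus)   # higher closeness = better-ranked alternative

# example: 3 parameter combinations scored on (mean F, std F, mean time, std time),
# all treated as costs (smaller is better), with illustrative weights
# topsis([[2.1, 0.3, 40, 5], [2.0, 0.5, 55, 9], [2.4, 0.2, 30, 4]],
#        [0.4, 0.2, 0.3, 0.1], [False, False, False, False])
```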

Adaptation to input complexity
Intuitively, inputs with complex shapes require a relatively longer execution time to reach the accuracy level of inputs with simple shapes. To study the algorithm performance under different inputs, we first specify a measurement metric to quantify the complexity of an input. Then, the algorithm is run 30 times for each input to observe the dispersion in the final solution. The complexity level is measured using the Shannon entropy of some raw information, namely the first-derivative values, as an indicator of their diversity. Equation 20 is used to calculate the entropy of the first-derivative values for each input, where P_i is the occurrence rate of a value in the set of all values at a specific measurement resolution [39]. The maximum possible entropy, for values spanning the range from −1 to 1 at a resolution of 0.01, is therefore equal to $\log_2((1 - (-1))/0.01) = \log_2(200) = 7.644$. Inputs of different entropies are compared using another metric for dispersion; the coefficient of variation (CV) is sufficient for this purpose considering that we are comparing different inputs.
Empirical results show a linear proportionality between the complexity metric (H) and the result variation metric (CV), as illustrated in Fig. 10(a). For the sake of stable performance over different inputs, the number of iterations is made to auto-adapt to the complexity of a given input. In other words, the number of iterations is increased by a factor such that the new number of iterations equals d(1 + k_h). The coefficient k_h is proportional to the entropy value and is determined empirically by testing the performance of one input over a range of k_h values. Figure 10 shows empirical data for an input with entropy value H = 4.6. In this study, CV values less than 1 are accepted, which corresponds to a coefficient value of 0.8. Extrapolating to other entropy values, the coefficient is approximated as k_h = (0.8/4.6) × H. The number of iterations resulting from the TOPSIS method is 300; accordingly, for the input with the maximum entropy value, the number of iterations is (k_h_max + 1) × 300 = ((0.8/4.6) × 7.644 + 1) × 300 = 698.
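A minimal sketch of the complexity index and the iteration adaptation follows: the first-derivative values are binned at the chosen measurement resolution, the Shannon entropy of the bin frequencies is computed (Eq. 20), and the iteration budget is scaled by (1 + k_h) with k_h = (0.8/4.6) × H as derived above. The 0.01 resolution and the base of 300 iterations follow the numbers quoted in the text; the function names are illustrative.

```python
import math
from collections import Counter

def shannon_entropy(first_derivatives, resolution=0.01):
    """Shannon entropy (Eq. 20) of first-derivative values, binned at the given
    measurement resolution; P_i is the occurrence rate of each binned value."""
    bins = [round(v / resolution) for v in first_derivatives]
    counts = Counter(bins)
    total = len(bins)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def adapted_iterations(entropy, base_iterations=300):
    """Scale the iteration budget with input complexity: d_new = d * (1 + k_h),
    with k_h = (0.8 / 4.6) * H as extrapolated in the text."""
    k_h = (0.8 / 4.6) * entropy
    return int(base_iterations * (1 + k_h))

# example: the maximum-entropy input H = 7.644 gives 698 iterations, as quoted above
# adapted_iterations(7.644)  ->  698
```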
The effect of the increment parameter on the entropy index calculation is illustrated in Fig. 11. The entropy was computed for a rectangular workpiece at different values of the increment parameter. Theoretically, the entropy index for the rectangle is equal to $2 \times \frac{1}{2}\log_2(2) = 1$. Under the given discretization method, the entropy index approaches this theoretical value as the increment parameter becomes smaller (H → 1 as the increment approaches zero). This can be explained by the fact that at higher values of the increment parameter, the inaccuracy of computing the attributes at singularity regions (i.e., the rectangle corners) becomes noticeable; the points close to the corners then become comparable in number to the whole point set, which causes the high entropy value. Other dependencies of the increment parameter are discussed in the section on selecting the increment parameter value.

Part one
The examples in Table 3 are CAD models of mechanical parts to be machined. These parts contain features with high measurement accuracy demands (highlighted in blue). The optimization process is carried out for each part to obtain the optimum fixture layout that best satisfies the robustness requirement. During machining, undesirable infinitesimal movements pivoting on the locating points can distort the relative distances of these features; placing the locators at favorable locations helps diminish the impact of this source of error. Accordingly, reference points are chosen to represent these features and direct the computation of the objective value. Based on the complexity index of each CAD input, the number of algorithm iterations is adjusted accordingly to ensure satisfactory accuracy of the obtained result, as explained in Section 6. Throughout the examples, the proposed method efficiently generated the optimum fixture layouts. Different runs may result in different layout alternatives, though all are acceptable selections since their objective values are close to each other. The algorithm execution time was analyzed for several runs with different inputs; the main factors influencing the consumed time are found to be the entropy index, the number of reference points, and a third factor related to the machine used to run the algorithm. More discussion is provided in the second part of the examples, where the focus is drawn to the quality of the obtained result.

Part two
For comparison, a set of benchmark examples (Fig. 12) drawn from the state of the art [2,5,11,15,40] is presented next. Numerical values obtained with our method are provided for workpieces (a) and (b), while (c) and (d) are left for qualitative comparison. Reference points for these examples are chosen to represent the entire workpiece, and no specific inner features are considered.
In the rectangular workpiece example (Fig. 12a), the obtained fixture schema is similar to that in [5] but differs slightly in the coordinates of each locator. In our model, the workpiece is assumed to be a rigid body; running the optimization with this model positions the locators L1 and L2 close to the corners. The aforementioned study, on the other hand, employed finite element analysis, which accounts for inner deformation. As a result, the locators L1 and L2 in that study tend to be positioned far from the corners. This can be explained by the fact that corners are more brittle and vulnerable to fracture and are therefore not suitable for locator placement.
Adopting models that use FEA may result in a more accurate solution for workpieces with a significant degree of inner deformation. Nevertheless, employing FEA carries a high computational cost, and a new mesh must be generated for the evaluation of each candidate. In contrast, rigid body models are fast to compute and suitable for implementation as a subroutine task in a CAD environment. In the hexagonal workpiece (Fig. 12d), the result of our study is almost identical to that found in [2] and [11], since all are built on the same concepts of kinematic analysis of rigid bodies but with different objective functions. The aerofoil cross-section example (Fig. 12c) is taken from the study [40], which concluded with three preferable fixture layouts. The fixture layout resulting from objective function 7 is similar to the fourth fixture layout found in that study. A better result is obtained with objective function 9, which accounts for curvature: the obtained fixture layout has the locators (L1′, L2′, and L3′) drifted away from regions of high curvature, as expected. For the ellipse workpiece (Fig. 12b), the results obtained from objective function 7 as well as objective function 9 are similar to those found in [15] (referred to in that study as infinitesimal error analysis and small error analysis, respectively). Regarding the optimization method, the former study specified an initial position and an optimization direction for each locator before carrying out the optimization process. In our study, however, no starting point or search direction is required, although it is possible to constrain the placement of each locator to a specific region. The results of DNSA using the two objective functions are reported in the corresponding tables.

Exploration and exploitation
Exploration and exploitation are essential characteristics of any heuristic algorithm. While exploration indicates the discovery of new regions of the search space, exploitation is carried out within a narrower region close to the previously visited points. A trade-off between exploration and exploitation is necessary for any successful algorithm [41]; this has been implicitly achieved in our algorithm through the use of the TOPSIS method. The transition from a large neighborhood search (exploration stage) to a small neighborhood search (exploitation stage) is the main characteristic of the DNSA algorithm.

Memory usage
A very powerful element in metaheuristic algorithms is the usage of memory, in other words, taking advantage of previous records of accepted candidates to guide the search for new candidates [42]. In SSA, the generation of the next candidate is performed according to the Markov chain rule: it exclusively utilizes the information of the current state to determine the transition to the next state. Therefore, it is a memory-less algorithm. In our method, however, records of the last accepted candidates are exploited to guide the search.

Characteristic graphs
The convergence of the algorithm can be visualized by a set of characteristic graphs, as shown in Fig. 13. These graphs were taken for the previous ellipse example. Chart (a) shows the reduction in the acceptance probability (Eq. 11) throughout the progress of the algorithm.

Deterministic locating
As stated previously, a satisfactory deterministic locating arrangement is associated with a nonzero determinant of the Jacobian matrix. A problem that arises under the proposed discretization approach is that some indeterministic arrangements of locators are quantified with a determinant that is not equal to zero but is merely the closest to zero among all other arrangements. In fact, this issue has no impact on the optimization process, since an arrangement with a low Jacobian determinant results in a high objective value and is consequently excluded. This holds as long as the input shape permits a deterministic arrangement of locators; some inputs cannot accommodate a deterministic set of locators due to boundary restrictions, as in a circle [14].
To enhance the algorithm robustness, we propose the use of a threshold to reject candidates whose absolute determinant of the Jacobian matrix is less than a threshold value. This value is determined roughly and empirically using a circular workpiece as input. A circle is very useful for estimating the threshold: any arrangement of locators on the circle perimeter must, analytically, have a zero determinant, since the rotational degree of freedom of a circle cannot be constrained by any set of locators. Applying our approach to a circular input must work in the same way, and any nonzero determinant value must be regarded as zero. To estimate the threshold, we run the algorithm on a circle of a given diameter while recording the absolute determinant values for all candidates. For noncircular shapes, the threshold is taken as the threshold of the minimum-diameter circle that bounds the entire shape.
To analyze the implications of the mesh increment parameter for this incorrect quantification of the Jacobian determinant, we conducted the following test. Circular workpieces of different diameters are plotted against the upper bound of the absolute Jacobian determinant (Fig. 14). The threshold bound is proportional to the size of the circle. This is explained by the amplification effect of the Jacobian matrix element (n_iy x_i − n_ix y_i), which causes the determinant to grow as the spatial coordinates increase. Choosing small values for the increment parameter is advisable to reduce the threshold as much as possible.
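The deterministic-locating check can be illustrated as follows: build a 3x3 locating Jacobian whose rows contain the normal components and the moment term (n_iy·x_i − n_ix·y_i) mentioned above, and reject any candidate whose absolute determinant falls below the empirically estimated threshold. Writing the rows in exactly this form is an assumption consistent with the kinematic model cited in the text, not a reproduction of the authors' matrix.

```python
import numpy as np

def locating_jacobian(points, normals):
    """3x3 locating Jacobian for three 2-D locators; each row is assumed to be
    [n_ix, n_iy, n_iy*x_i - n_ix*y_i] (the moment term discussed above)."""
    rows = []
    for (x, y), (nx, ny) in zip(points, normals):
        rows.append([nx, ny, ny * x - nx * y])
    return np.array(rows)

def is_deterministic(points, normals, threshold):
    """Reject candidates whose |det(J)| falls below the empirically estimated
    threshold (estimated from the bounding circle of the workpiece)."""
    return abs(np.linalg.det(locating_jacobian(points, normals))) >= threshold
```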

Selecting the increment parameter value
The increment parameter should be given a suitably small value to maintain an acceptable accuracy level in the result. In Section 6.2, the influence of the increment parameter on the entropy index was illustrated. In the same context, we present here the size of the workpiece as another factor to consider when selecting the increment value. Figure 15 shows two cases for testing the effect of workpiece size. In the first plot (a), the variation in the results for different workpiece sizes is plotted for the same value of the increment parameter. As expected, the coefficient of variation decreases as the workpiece size increases. In the second plot (b), the workpiece size is held constant while the increment value is varied. The left part of the plot is a bit tricky: at first sight, the low coefficient of variation indicates good consistency of the result. A closer look reveals that at a high increment value the point-set domain size is small, and so is the number of alternative solutions, which reduces the variation in the final result. In the right-hand part of the plot, however, the variation is reduced as anticipated after lowering the increment value. To sum up, a user can digest the information provided by all the previous graphs (Figs. 12, 14, 15) to determine the best selection of the increment value. A rule of thumb is to set the increment equal to 1/100 of the maximum circle diameter that can be drawn within the workpiece boundaries.

Accuracy of the obtained result
The result obtained in this study is not necessarily the global optimum, but it is an adequate approximation. The errors arising from the implementation of our method can be confined to three bounded error sources. The first is the validity of adopting the rigid body fixture model. In this model, the deformation of the workpiece is neglected, which is a fair assumption for relatively stiff components. However, in some cases the deformation is influential and it becomes essential to embed its effect into the model; other models, such as the workpiece-fixture elastic contact model and the workpiece elastic model, might then be better alternatives for representing the problem. The second bounded error is owing to the approximate representation of the workpiece surface points. In this study, a uniform mesh is used to automate the discretization process for any given 2-D workpiece; the accuracy of this representation is affected by the increment parameter, and the smaller the increment, the higher the accuracy. An implication of this error has been analyzed and discussed in Section 7.3. The third bounded error is owing to the accuracy of the optimization algorithm itself. Different runs of the DNSA algorithm often yield different results. In setting up the algorithm parameters, the deviation from the global optimum is made as small as possible; the two measures taken to constrain this bounded error were explained in Section 6. Despite these bounded errors, the DNSA algorithm provides sufficient results in terms of accuracy and performance, as shown in the previous examples.

Conclusion and future work
A robust arrangement of the fixture layout contributes positively to the measurement accuracy of the workpiece during machining. This paper presents a novel algorithm (DNSA) for fixture layout optimization. The aim is to find, from a large space of candidates, the robust fixture layout that best minimizes the impact of locator localization errors on the accuracy of the machined features. The selection of candidates is subjected to a probability function over a search space that declines gradually as execution progresses. The main characteristic of DNSA is its transition from an exploration state to an exploitation state. To reach the best algorithm performance, the algorithm control parameters were systematically configured using the TOPSIS method for one specific input; applying the algorithm to different types of inputs, however, showed variation in performance. It was found that, for any input, the algorithm performance metric (the coefficient of variation of the final objective values) is proportional to the input complexity metric (based on Shannon entropy). Therefore, stable performance was maintained for different inputs by increasing the number of iterations in proportion to the complexity metric. The proposed algorithm has proven to be an efficient technique, comparable to other algorithms used in the field, i.e., GA and ACA.
This work contributes further to CAD-based methods for automated fixture design. The implementation of our approach, from the discretization to the computation of the objective value, was carried out using the application programming interface NXOpen and the NX user interface. To conduct the optimization for a given 2-D workpiece, the user needs only to specify the face entity in the CAD model and the features of critical measurement accuracy.
As part of future work, integrating two types of models, specifically the rigid body workpiece model and the elastic workpiece model, might be beneficial. The result obtained from our rigid-body-based model is close to that of the elastic workpiece model found in the literature. Moreover, the evaluation process is fast to compute and can produce sufficient result accuracy when deformation is not influential. However, the use of the elastic workpiece model becomes essential when inner deformation is significant, as in heavy-duty machining; since evaluating this model requires finite element analysis, its computational cost is very high and it becomes impractical to evaluate every possible candidate in a large search space. Therefore, we suggest further investigation into integrating both models, so that the rigid body model is used in a preliminary phase to filter the search space down to a small number of best-fitting candidates, and the elastic workpiece model is then used in a secondary phase to select the optimum solution while accounting for inner deformation.