ll-labs.com/cm/cs/doc/96/4-02.ps.gz">Direct Search Methods: Once Scorned, Now Respectable), they are used when computing the derivative is either impossible (noisy functions, unpredictable discontinuities) or too difficult (complexity, computation cost). In the first case, a reasonably good point, rather than a true optimum, is all that is sought. In the second case, an optimum is desired but cannot be found at a reasonable cost. In both situations, direct search methods can be useful.
Simplex-based direct search methods compare the objective function values at the vertices of a simplex (a set of n+1 points in dimension n), which is updated by the algorithm's steps.
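The vertex-comparison idea can be sketched in plain Java. This is a simplified reflect/contract variant for illustration only (the class name and helper are hypothetical); the actual Nelder-Mead algorithm used by this optimizer also performs expansion and shrink steps:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.function.Function;

// Illustrative sketch, NOT the library implementation: a simplex of n+1
// vertices is kept sorted by objective value, and each iteration tries to
// replace the worst vertex with a better one.
public class NelderMeadSketch {
    static double[] minimize(Function<double[], Double> f, double[] start, int iters) {
        int n = start.length;
        double[][] s = new double[n + 1][];   // the simplex: n+1 vertices in dimension n
        s[0] = start.clone();
        for (int i = 0; i < n; i++) {         // perturb each coordinate to build the start simplex
            s[i + 1] = start.clone();
            s[i + 1][i] += 0.5;
        }
        for (int it = 0; it < iters; it++) {
            Arrays.sort(s, Comparator.comparingDouble(f::apply)); // best first, worst last
            double[] c = new double[n];       // centroid of all vertices except the worst
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < n; j++) c[j] += s[i][j] / n;
            }
            double[] worst = s[n];
            double[] refl = new double[n];    // reflect the worst vertex through the centroid
            for (int j = 0; j < n; j++) refl[j] = 2 * c[j] - worst[j];
            if (f.apply(refl) < f.apply(worst)) {
                s[n] = refl;                  // reflected point is an improvement: keep it
            } else {
                for (int j = 0; j < n; j++)   // otherwise contract the worst vertex inward
                    s[n][j] = (c[j] + worst[j]) / 2;
            }
        }
        Arrays.sort(s, Comparator.comparingDouble(f::apply));
        return s[0];                          // best vertex found
    }

    public static void main(String[] args) {
        // Minimize a quadratic with minimum at (3, 0); the best vertex converges toward it.
        double[] best = minimize(p -> (p[0] - 3) * (p[0] - 3) + p[1] * p[1],
                                 new double[] {0, 0}, 200);
        System.out.printf("%.3f %.3f%n", best[0], best[1]);
    }
}
```

Note how only objective-value comparisons are used: no derivative of the function ever appears, which is the defining property of direct search.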
The {@link #setSimplex(AbstractSimplex) setSimplex} method must be called prior to calling the {@code optimize} method.
Each call to {@link #optimize(int,MultivariateFunction,GoalType,double[]) optimize} will reuse the start configuration of the current simplex and move it such that its first vertex is at the provided start point of the optimization. If the {@code optimize} method is called to solve a different problem and the number of parameters changes, the simplex must be re-initialized to one with the appropriate dimensions.
Convergence is checked by providing the worst points of the previous and current simplexes to the convergence checker, not the best ones.
This simplex optimizer implementation does not directly support constrained optimization with simple bounds. For such problems, either use a method dedicated to that purpose, such as {@link CMAESOptimizer} or {@link BOBYQAOptimizer}, or wrap the objective function in an adapter like {@link MultivariateFunctionMappingAdapter} or {@link MultivariateFunctionPenaltyAdapter}.
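As a sketch of the adapter approach (against the Commons Math 3.0 {@code optimization} API; the objective function, bounds, and tolerances here are made up for illustration):

```java
import org.apache.commons.math3.analysis.MultivariateFunction;
import org.apache.commons.math3.optimization.GoalType;
import org.apache.commons.math3.optimization.PointValuePair;
import org.apache.commons.math3.optimization.direct.MultivariateFunctionMappingAdapter;
import org.apache.commons.math3.optimization.direct.NelderMeadSimplex;
import org.apache.commons.math3.optimization.direct.SimplexOptimizer;

public class BoundedSimplexExample {
    public static void main(String[] args) {
        // Illustrative objective: shifted quadratic with minimum at (2, 1).
        MultivariateFunction f =
            p -> (p[0] - 2) * (p[0] - 2) + (p[1] - 1) * (p[1] - 1);

        // Map the bounded domain [0, 5] x [0, 5] onto an unbounded one.
        MultivariateFunctionMappingAdapter bounded =
            new MultivariateFunctionMappingAdapter(
                f, new double[] {0, 0}, new double[] {5, 5});

        SimplexOptimizer optimizer = new SimplexOptimizer(1e-10, 1e-30);
        optimizer.setSimplex(new NelderMeadSimplex(2)); // must be set before optimize()

        // The optimizer works in unbounded coordinates, so map the start point...
        PointValuePair result = optimizer.optimize(
            1000, bounded, GoalType.MINIMIZE,
            bounded.boundedToUnbounded(new double[] {1, 1}));

        // ...and map the solution back to the bounded domain.
        double[] solution = bounded.unboundedToBounded(result.getPoint());
        System.out.printf("%.3f %.3f%n", solution[0], solution[1]);
    }
}
```

The mapping adapter keeps the optimizer itself unconstrained; the penalty adapter is the alternative when a mapped parametrization distorts the problem too much.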
@see AbstractSimplex
@see MultivariateFunctionMappingAdapter
@see MultivariateFunctionPenaltyAdapter
@see CMAESOptimizer
@see BOBYQAOptimizer
@deprecated As of 3.1 (to be removed in 4.0).
@since 3.0