ll-labs.com/cm/cs/doc/96/4-02.ps.gz">Direct Search Methods: Once Scorned, Now Respectable</a>), they are used when either the computation of the derivative is impossible (noisy functions, unpredictable discontinuities) or difficult (complexity, computation cost). In the first cases, rather than an optimum, only a reasonably good point is sought. In the latter cases, an optimum is desired but cannot reasonably be found. In all cases direct search methods can be useful.
Simplex-based direct search methods compare the objective function values at the vertices of a simplex (a set of n+1 points in dimension n), which is updated by the algorithm's steps.
The simplex update procedure ({@link NelderMeadSimplex} or {@link MultiDirectionalSimplex}) must be passed to the {@code optimize} method.
Each call to {@code optimize} will re-use the start configuration of the current simplex and move it such that its first vertex is at the provided start point of the optimization. If the {@code optimize} method is called to solve a different problem and the number of parameters changes, the simplex must be re-initialized to one with the appropriate dimensions.
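As an illustration, here is a minimal sketch of such a call sequence; the quadratic objective, tolerances, start point and evaluation limit below are arbitrary choices for the example, not prescriptions of this class:
<pre>{@code
import org.apache.commons.math3.analysis.MultivariateFunction;
import org.apache.commons.math3.optim.InitialGuess;
import org.apache.commons.math3.optim.MaxEval;
import org.apache.commons.math3.optim.PointValuePair;
import org.apache.commons.math3.optim.nonlinear.scalar.GoalType;
import org.apache.commons.math3.optim.nonlinear.scalar.ObjectiveFunction;
import org.apache.commons.math3.optim.nonlinear.scalar.noderiv.NelderMeadSimplex;
import org.apache.commons.math3.optim.nonlinear.scalar.noderiv.SimplexOptimizer;

public class SimplexExample {
    public static void main(String[] args) {
        // Arbitrary test objective: a quadratic bowl with minimum at (1, 2).
        MultivariateFunction objective = new MultivariateFunction() {
            public double value(double[] p) {
                double dx = p[0] - 1.0;
                double dy = p[1] - 2.0;
                return dx * dx + dy * dy;
            }
        };

        // Relative and absolute convergence tolerances (example values).
        SimplexOptimizer optimizer = new SimplexOptimizer(1e-10, 1e-30);

        PointValuePair optimum = optimizer.optimize(
            new MaxEval(1000),
            new ObjectiveFunction(objective),
            GoalType.MINIMIZE,
            new InitialGuess(new double[] { 0.0, 0.0 }),
            // Simplex update procedure; its dimension must match the
            // number of parameters of the problem (2 here).
            new NelderMeadSimplex(2));

        System.out.println(optimum.getPoint()[0] + " " + optimum.getPoint()[1]
                           + " -> " + optimum.getValue());
    }
}
}</pre>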
Convergence is checked by providing the worst points of the previous and current simplices to the convergence checker, not the best ones.
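A custom checker can be supplied through the constructor taking a {@code ConvergenceChecker<PointValuePair>}; for instance, a sketch using {@code SimpleValueChecker} (the thresholds below are arbitrary example values):
<pre>{@code
import org.apache.commons.math3.optim.SimpleValueChecker;
import org.apache.commons.math3.optim.nonlinear.scalar.noderiv.SimplexOptimizer;

// The checker is fed the worst vertex of the previous and current simplex;
// convergence is declared when those values are close enough.
SimplexOptimizer optimizer =
    new SimplexOptimizer(new SimpleValueChecker(1e-10, 1e-30));
}</pre>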
This simplex optimizer implementation does not directly support constrained optimization with simple bounds; for such problems, either a more specialized algorithm such as {@link CMAESOptimizer} or {@link BOBYQAOptimizer} must be used, or the objective function must be wrapped in an adapter like {@link org.apache.commons.math3.optim.nonlinear.scalar.MultivariateFunctionMappingAdapter MultivariateFunctionMappingAdapter} or {@link org.apache.commons.math3.optim.nonlinear.scalar.MultivariateFunctionPenaltyAdapter MultivariateFunctionPenaltyAdapter}.
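For example, a sketch of the adapter approach, reusing the {@code objective} function from the example above; the bounds [0, 5] on each parameter and the start point are hypothetical values chosen for illustration:
<pre>{@code
import org.apache.commons.math3.optim.nonlinear.scalar.MultivariateFunctionMappingAdapter;

// Wrap the objective so that each of the two parameters stays within [0, 5].
MultivariateFunctionMappingAdapter wrapped =
    new MultivariateFunctionMappingAdapter(objective,
                                           new double[] { 0.0, 0.0 },   // lower bounds
                                           new double[] { 5.0, 5.0 });  // upper bounds

SimplexOptimizer optimizer = new SimplexOptimizer(1e-10, 1e-30);
PointValuePair optimum = optimizer.optimize(
    new MaxEval(1000),
    new ObjectiveFunction(wrapped),          // the optimizer sees the unbounded view
    GoalType.MINIMIZE,
    new InitialGuess(wrapped.boundedToUnbounded(new double[] { 1.0, 1.0 })),
    new NelderMeadSimplex(2));

// Map the unbounded solution back into the bounded domain.
double[] boundedSolution = wrapped.unboundedToBounded(optimum.getPoint());
}</pre>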
The call to {@link #optimize(OptimizationData[]) optimize} will throw {@link MathUnsupportedOperationException} if bounds are passed to it.
@since 3.0