Package de.jungblut.math.minimize

Examples of de.jungblut.math.minimize.GradientDescent


   * @param alpha the learning rate for gradient descent.
   * @param numIterations the maximum number of training iterations (training
   *          stops early if convergence is reached before that).
   */
  public void train(DoubleVector[] trainingSet, double alpha, int numIterations) {
    train(trainingSet, new GradientDescent(alpha, 0d), numIterations);
  }
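For context, the following is a minimal, self-contained sketch of the batch gradient descent update that a train() call like the one above ultimately relies on. Everything in this sketch (class name, method, data) is an illustrative assumption and not part of de.jungblut.math; it only demonstrates the update rule w := w - alpha * gradient.

// Minimal sketch of batch gradient descent on squared error for a line fit.
// Illustrative only: class, method and data are assumptions, not library code.
public final class GradientDescentSketch {

  // Fits y = w0 + w1 * x by repeatedly stepping against the averaged gradient.
  static double[] fitLine(double[] xs, double[] ys, double alpha, int numIterations) {
    double w0 = 0d, w1 = 0d;
    int m = xs.length;
    for (int it = 0; it < numIterations; it++) {
      double grad0 = 0d, grad1 = 0d;
      for (int i = 0; i < m; i++) {
        double error = (w0 + w1 * xs[i]) - ys[i];
        grad0 += error;         // derivative of 0.5 * error^2 w.r.t. w0
        grad1 += error * xs[i]; // derivative of 0.5 * error^2 w.r.t. w1
      }
      // gradient descent update: step against the averaged gradient
      w0 -= alpha * grad0 / m;
      w1 -= alpha * grad1 / m;
    }
    return new double[] { w0, w1 };
  }

  public static void main(String[] args) {
    double[] xs = { 0, 1, 2, 3, 4 };
    double[] ys = { 1, 3, 5, 7, 9 }; // sampled from y = 1 + 2x
    double[] w = fitLine(xs, ys, 0.05, 10000);
    System.out.printf("w0=%.3f w1=%.3f%n", w[0], w[1]); // approaches w0=1, w1=2
  }
}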


    // test the linear regression case:
    // use gradient descent with a very small learning rate
    MultilayerPerceptron mlp = MultilayerPerceptron.MultilayerPerceptronBuilder
        .create(new int[] { 2, 1 },
            new ActivationFunction[] { LINEAR.get(), LINEAR.get() },
            new SquaredMeanErrorFunction(), new GradientDescent(1e-8, 6e-5),
            10000).verbose(false).build();

    // sample a line of points
    Tuple<DoubleVector[], DoubleVector[]> sample = sampleLinear();
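The builder above wires two LINEAR layers into a 2 -> 1 network with squared mean error, so the test is effectively linear regression fit by gradient descent. Below is a hypothetical sketch of the kind of noiseless linear data sampleLinear() might produce for it; the sample size, the plane y = 2*x1 + 3*x2, and all names are assumptions for illustration, not the actual test fixture.

import java.util.Random;

// Hypothetical stand-in for sampleLinear(): noiseless points on a plane,
// which a linear 2 -> 1 network can fit to (near) zero squared error.
public final class LinearSampleSketch {

  public static void main(String[] args) {
    Random rnd = new Random(0L);
    int n = 100;
    double[][] features = new double[n][2];
    double[][] outcomes = new double[n][1];
    for (int i = 0; i < n; i++) {
      double x1 = rnd.nextDouble();
      double x2 = rnd.nextDouble();
      features[i][0] = x1;
      features[i][1] = x2;
      outcomes[i][0] = 2d * x1 + 3d * x2; // target is exactly linear in the inputs
    }
    System.out.println("sampled " + n + " noiseless linear training pairs");
  }
}

With a learning rate as small as 1e-8, the 10000 iterations passed to the builder presumably compensate by giving the minimizer many tiny steps toward that linear target.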
