News

Discover a powerful line search method for unconstrained optimization problems. Achieve global convergence and R-linear convergence for nonconvex and convex functions. No line search rule needed.
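For orientation, a minimal sketch of a classical Armijo backtracking rule is shown below; this is the kind of explicit line search rule such methods aim to dispense with, not the method of the paper, and all names and parameter values are illustrative.

```python
import numpy as np

def armijo_backtracking(f, grad, x, d, alpha0=1.0, rho=0.5, c=1e-4):
    """Classical Armijo backtracking rule (illustrative only).

    Shrinks alpha until f(x + alpha*d) <= f(x) + c*alpha*grad(x)^T d.
    """
    alpha = alpha0
    fx = f(x)
    slope = c * grad(x).dot(d)   # negative for a descent direction d
    while f(x + alpha * d) > fx + alpha * slope:
        alpha *= rho
    return alpha

# Example: one steepest-descent step on a simple quadratic
f = lambda x: 0.5 * x.dot(x)
grad = lambda x: x
x = np.array([3.0, -4.0])
d = -grad(x)                      # steepest-descent direction
alpha = armijo_backtracking(f, grad, x, d)
x_next = x + alpha * d
```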
Implementation of numerical optimization algorithms in MATLAB, including derivative-free and gradient-based methods for unconstrained problems, and projection techniques for constrained optimization.
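As a sketch of the projection idea mentioned above, the following projected-gradient step handles simple box constraints; this is a generic illustration in Python (the repository itself is in MATLAB), and the function names are hypothetical.

```python
import numpy as np

def project_box(x, lower, upper):
    """Euclidean projection onto the box {x : lower <= x <= upper}."""
    return np.clip(x, lower, upper)

def projected_gradient_step(grad, x, lower, upper, step=0.1):
    """One projected-gradient iteration: gradient step, then projection."""
    return project_box(x - step * grad(x), lower, upper)

# Example: minimize ||x - c||^2 subject to 0 <= x <= 1
c = np.array([1.5, -0.3])
grad = lambda x: 2.0 * (x - c)
x = np.zeros(2)
for _ in range(50):
    x = projected_gradient_step(grad, x, 0.0, 1.0)
# x approaches [1.0, 0.0], the projection of c onto the box
```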
Discover new iteration schemes for unconstrained optimization based on a predictor-corrector strategy. This paper analyzes global convergence and compares performance with the DFP and BFGS updates.
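For reference, the quasi-Newton updates used as baselines are the textbook DFP and BFGS formulas; a minimal sketch of the inverse-Hessian versions follows (this is not the paper's predictor-corrector scheme, and the function names are illustrative).

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """Standard BFGS update of the inverse Hessian approximation H.

    s = x_{k+1} - x_k, y = grad_{k+1} - grad_k; requires y^T s > 0.
    """
    rho = 1.0 / y.dot(s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def dfp_inverse_update(H, s, y):
    """DFP update of the inverse Hessian approximation, for comparison."""
    Hy = H @ y
    return H - np.outer(Hy, Hy) / y.dot(Hy) + np.outer(s, s) / y.dot(s)
```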
The file nonlinear_equations_library.jl is also available, containing implementations of some of the test problems proposed by Moré, Garbow, and Hillstrom. The algorithms were written in the ...
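As an illustration of that test set, its first problem is the Rosenbrock function; the sketch below is in Python rather than Julia and does not reflect the library's actual API.

```python
import numpy as np

def rosenbrock_residuals(x):
    """Residuals of Moré-Garbow-Hillstrom problem 1 (Rosenbrock)."""
    return np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

def rosenbrock(x):
    """Objective f(x) = sum of squared residuals."""
    r = rosenbrock_residuals(x)
    return r.dot(r)

# Standard starting point and known minimizer
x0 = np.array([-1.2, 1.0])
assert rosenbrock(np.array([1.0, 1.0])) == 0.0
```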
Conjugate gradient methods form a class of iterative algorithms that are highly effective for solving large‐scale unconstrained optimisation problems. They achieve efficiency by constructing ...
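To make the construction concrete, here is a minimal nonlinear conjugate gradient sketch with the Fletcher-Reeves coefficient; practical implementations use a strong Wolfe line search rather than the crude backtracking shown here, and all names are illustrative.

```python
import numpy as np

def fletcher_reeves_cg(f, grad, x0, max_iter=200, tol=1e-8):
    """Nonlinear CG: d_{k+1} = -g_{k+1} + beta_k d_k, beta_k = |g_{k+1}|^2 / |g_k|^2."""
    x = np.array(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Simple backtracking step length (a strong Wolfe search is normally used).
        alpha, fx = 1.0, f(x)
        for _ in range(60):
            if f(x + alpha * d) <= fx + 1e-4 * alpha * g.dot(d):
                break
            alpha *= 0.5
        x = x + alpha * d
        g_new = grad(x)
        beta = g_new.dot(g_new) / g.dot(g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x

# Example: the Rosenbrock function from the standard starting point
f = lambda x: 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
grad = lambda x: np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                           200.0 * (x[1] - x[0] ** 2)])
x_approx = fletcher_reeves_cg(f, grad, np.array([-1.2, 1.0]))
```

Only a handful of vectors (iterate, gradient, direction) need to be stored, which is what makes the class attractive for large-scale problems.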
In this paper we present two algorithms for LC¹ unconstrained optimization problems that use the second-order Dini upper directional derivative. These methods are simple and easy to implement. We ...
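For context, one common definition of the second-order Dini upper directional derivative for an LC¹ function (f continuously differentiable with locally Lipschitz gradient) is sketched below; the paper may state an equivalent but differently written form.

```latex
% Second-order Dini upper directional derivative of an LC^1 function f
% at the point x in the direction d (one common formulation):
f''_D(x; d) \;=\; \limsup_{t \downarrow 0}
  \frac{\nabla f(x + t d)^{\mathsf T} d - \nabla f(x)^{\mathsf T} d}{t}
```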
Computational methods for unconstrained optimization are an important research topic in numerical computation. It is of great significance to solve the problem of unconstrained ...
In this paper we provide a systematic comparison of the following population-based optimization techniques: Genetic Algorithm (GA), Evolution Strategy (ES), Cuckoo Search (CS), Differential Evolution ...
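As an example of one of the compared techniques, a minimal Differential Evolution (DE/rand/1/bin) sketch is given below; the population size, F, and CR values are illustrative and are not taken from the paper's experimental setup.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=100, seed=0):
    """Minimal DE/rand/1/bin: mutation x_r1 + F*(x_r2 - x_r3), binomial crossover."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # keep at least one gene from the mutant
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                    # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

# Example: minimize the 5-dimensional sphere function on [-5, 5]^5
best_x, best_f = differential_evolution(lambda x: float(np.sum(x ** 2)),
                                        (np.full(5, -5.0), np.full(5, 5.0)))
```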