Here is a sketch of specific modifications a user may wish to make to the code.
Line minimization is currently performed by fzero through the wrapper gradline. However, one may have better ways to line-minimize within an optimization routine, or one may wish to supplement fzero for reasons of economy.
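For example, one could substitute a simple backtracking search along a geodesic for the fzero call. The following is a minimal sketch; the names F (objective), grad (gradient), and ip (inner product), and the signature move(Y,H,t), are assumptions made for illustration rather than the package's documented interfaces.

    % Minimal backtracking (Armijo) line search along a geodesic.
    % F, grad, ip, and the signature move(Y,H,t) are assumed here
    % for illustration; substitute the package's actual interfaces.
    function [t, Ynew] = backtrack(Y, H, t0)
      beta  = 0.5;                     % step-shrinking factor
      sigma = 1e-4;                    % sufficient-decrease constant
      f0 = F(Y);
      g0 = ip(Y, grad(Y), H);          % slope along H; should be negative
      t  = t0;  Ynew = move(Y, H, t);
      while F(Ynew) > f0 + sigma*t*g0  % Armijo sufficient-decrease test
        t = beta*t;                    % shrink the step and try again
        Ynew = move(Y, H, t);
      end
    end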
New unconstrained minimization algorithms can be adapted to the geometric framework of the sg_ algorithms without having to know too many details. Most unconstrained minimizations contain line searches (t*v), Hessian applications (H*v), and inner products (v'*w). To properly geometrize such an algorithm, one has to do no more than replace these components with their geometric counterparts: geodesic steps (move), covariant Hessian applications (dgrad), and the manifold inner product. For inverting the covariant Hessian, two routines are supplied: invdgrad_CG (which uses a conjugate gradient method) and invdgrad_MINRES (which uses a MINRES algorithm). Any matrix inversion routine which can stably invert black-box linear operators could be used in place of these, and no other modifications to the code would be required.
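As an illustration of the translation, a flat-space Newton iteration, solve H*d = -g and update Y = Y + t*d, becomes the following on the manifold. The signatures grad(Y), invdgrad_CG(Y,b), and move(Y,d,t) are assumptions for the sketch.

    % One geometrized Newton step.  Signatures are assumptions:
    % grad(Y) returns the gradient, invdgrad_CG(Y,b) solves
    % dgrad(Y,d) = b by conjugate gradients, and move(Y,d,t)
    % follows the geodesic from Y in direction d for time t.
    g = grad(Y);
    d = invdgrad_CG(Y, -g);       % Newton direction on the manifold
    [t, Y] = backtrack(Y, d, 1);  % line-minimize along the geodesic (see sketch above)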
Quantities such as Y*B, or complicated matrix functions of B(Y) in the Jordan example, could be saved in a workspace rather than recomputed. Some of the terms in the computation of dgrad could likewise be saved (in fact, our current implementation essentially recomputes the gradient every time the covariant Hessian is applied).
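For instance, assuming the data matrix lives in a field FParameters.B (an illustrative layout, not necessarily the package's), the product Y*B can be cached with a persistent variable and recomputed only when Y changes:

    % Cache Y*B between calls; recompute only when Y changes.
    % The field name FParameters.B is illustrative.
    function YB = cached_YB(Y)
      global FParameters;
      persistent Ylast YBlast;         % retained across calls
      if isempty(Ylast) || ~isequal(Y, Ylast)
        YBlast = Y*FParameters.B;      % the expensive product
        Ylast  = Y;
      end
      YB = YBlast;
    end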
Although the dtangent functions are kept separate from the dgrad functions, one often finds that when these functions are in-lined, some terms cancel out and therefore do not need to be computed. One might also consider in-lining the move function when performing line minimizations, so that data can be reused to make function evaluations and point updates faster; a sketch of this idea follows.
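As a concrete possibility on the Grassmann manifold, the geodesic formula of Edelman, Arias, and Smith, Y(t) = Y*V*cos(S*t)*V' + U*sin(S*t)*V' with H = U*S*V' the thin SVD of the direction, can be in-lined into the line search so that the SVD is computed once and reused for every trial step. F is again an assumed objective.

    % Reuse one SVD of the direction H for all trial steps t,
    % applying the Grassmann geodesic formula of Edelman, Arias,
    % and Smith.  F is an assumed objective function.
    [U, S, V] = svd(H, 0);            % one-time factorization of the direction
    s = diag(S);
    geo = @(t) Y*(V*diag(cos(s*t))*V') + U*diag(sin(s*t))*V';
    for t = 2.^(0:-1:-4)              % trial steps 1, 1/2, ..., 1/16
      f = F(geo(t));                  % each evaluation reuses U, s, V
      % ... accept or reject t as in the backtracking sketch above ...
    end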
A global variable, FParameters, is used to hold data which could be put in the argument lists of all the functions. The user may wish to modify his/her code so that this data is passed explicitly to each function as individual arguments.
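A lightweight way to do this, sketched below, is to give each function an explicit params argument and, wherever a one-argument interface is still required, close over the data with an anonymous function. The field name params.B and the two-argument signatures are illustrative.

    % Pass data explicitly instead of through the global FParameters.
    params.B = B;               % illustrative field; store whatever F needs
    Fh = @(Y) F(Y, params);     % one-argument closure for existing call sites
    gh = @(Y) grad(Y, params);  % assumes F and grad rewritten to take params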
The Hessian inverter, invdgrad, does not have any preconditioning step. Conjugate gradient can perform very poorly without preconditioning, and the preconditioning of black-box linear operators like dgrad is a very difficult problem. The user might try to find a good preconditioner suited to his/her particular application.
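One way to experiment, sketched below, is to hand the black-box operator to MATLAB's pcg together with a problem-specific preconditioner. Here dgradvec and precvec are hypothetical wrappers that flatten tangent matrices into column vectors, and gvec is the similarly flattened gradient; the preconditioner is whatever cheaply invertible approximation of the covariant Hessian the user can construct.

    % Preconditioned CG on the covariant Hessian via MATLAB's pcg.
    % dgradvec and precvec are hypothetical wrappers mapping tangent
    % matrices to column vectors; precvec applies the user's M^{-1}.
    afun = @(x) dgradvec(Y, x);             % black-box Hessian application
    mfun = @(x) precvec(Y, x);              % user-supplied preconditioner solve
    d = pcg(afun, -gvec, 1e-6, 100, mfun);  % solve dgrad(Y,d) = -g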