Krylov Documentation

This page contains the Krylov Package documentation.

The _bicgstab Module

pyamg.krylov._bicgstab.bicgstab(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)

Biconjugate Gradient Algorithm with Stabilization

Solves the linear system Ax = b. Left preconditioning is supported.

Parameters :

A : {array, matrix, sparse matrix, LinearOperator}

n x n, linear system to solve

b : {array, matrix}

right hand side, shape is (n,) or (n,1)

x0 : {array, matrix}

initial guess, default is a vector of zeros

tol : float

relative convergence tolerance, i.e. tol is scaled by ||r_0||_2

maxiter : int

maximum number of allowed iterations

xtype : type

dtype for the solution, default is automatic type detection

M : {array, matrix, sparse matrix, LinearOperator}

n x n, inverted preconditioner, i.e. solve M A x = b.

callback : function

User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector

residuals : list

residuals has the residual norm history, including the initial residual, appended to it

Returns :

(xNew, info) :

xNew : an updated guess to the solution of Ax = b

info : halting status of bicgstab

0

successful exit

>0

convergence to tolerance not achieved, return iteration count instead.

<0

numerical breakdown, or illegal input

Notes

The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
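
For instance, A can be supplied as a mat-vec routine wrapped in a LinearOperator rather than as an explicit matrix. The sketch below is not part of the original docstring; the lambda wrapper is purely illustrative.

>>> import numpy as np
>>> from scipy.sparse.linalg import LinearOperator
>>> from pyamg.gallery import poisson
>>> from pyamg.krylov import bicgstab
>>> A = poisson((10,10))
>>> # expose only the matrix-vector product to the solver
>>> A_op = LinearOperator(A.shape, matvec=lambda v: A*v, dtype=A.dtype)
>>> b = np.ones((A.shape[0],))
>>> (x, flag) = bicgstab(A_op, b, tol=1e-8)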

References

[R19]Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 231-234, 2003 http://www-users.cs.umn.edu/~saad/books.html

Examples

>>> from pyamg.krylov import bicgstab
>>> from pyamg.util.linalg import norm
>>> import numpy 
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = bicgstab(A,b, maxiter=2, tol=1e-8)
>>> print(norm(b - A*x))
4.68163045309

The _cg Module

pyamg.krylov._cg.cg(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)

Conjugate Gradient algorithm

Solves the linear system Ax = b. Left preconditioning is supported.

Parameters :

A : {array, matrix, sparse matrix, LinearOperator}

n x n, linear system to solve

b : {array, matrix}

right hand side, shape is (n,) or (n,1)

x0 : {array, matrix}

initial guess, default is a vector of zeros

tol : float

relative convergence tolerance, i.e. tol is scaled by the preconditioner norm of r_0, or ||r_0||_M.

maxiter : int

maximum number of allowed iterations

xtype : type

dtype for the solution, default is automatic type detection

M : {array, matrix, sparse matrix, LinearOperator}

n x n, inverted preconditioner, i.e. solve M A x = b.

callback : function

User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector

residuals : list

residuals contains the residual norm history, including the initial residual. The preconditioner norm is used, instead of the Euclidean norm.

Returns :

(xNew, info) :

xNew : an updated guess to the solution of Ax = b

info : halting status of cg

0

successful exit

>0

convergence to tolerance not achieved, return iteration count instead.

<0

numerical breakdown, or illegal input

Notes

The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.

The residual in the preconditioner norm is both used for halting and returned in the residuals list.
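
As an illustration, the sketch below (not from the original docstring) passes a smoothed aggregation AMG cycle as the preconditioner M and collects the preconditioned residual history through the residuals argument; the AMG setup is only one possible choice of M.

>>> import numpy as np
>>> from pyamg import smoothed_aggregation_solver
>>> from pyamg.gallery import poisson
>>> from pyamg.krylov import cg
>>> A = poisson((50,50), format='csr')
>>> b = np.ones((A.shape[0],))
>>> M = smoothed_aggregation_solver(A).aspreconditioner()  # AMG V-cycle as a LinearOperator
>>> res = []  # preconditioned residual norms, including the initial residual
>>> (x, flag) = cg(A, b, M=M, tol=1e-8, residuals=res)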

References

[R20]Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 262-67, 2003 http://www-users.cs.umn.edu/~saad/books.html

Examples

>>> from pyamg.krylov import cg
>>> from pyamg.util.linalg import norm
>>> import numpy 
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = cg(A,b, maxiter=2, tol=1e-8)
>>> print(norm(b - A*x))
10.9370700187

The _cgne Module

pyamg.krylov._cgne.cgne(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)

Conjugate Gradient, Normal Error algorithm

Applies CG to the normal equations, A A.H y = b, with x = A.H y. Left preconditioning is supported. Note that unless A is well-conditioned, the use of CGNE is inadvisable.

Parameters :

A : {array, matrix, sparse matrix, LinearOperator}

n x n, linear system to solve

b : {array, matrix}

right hand side, shape is (n,) or (n,1)

x0 : {array, matrix}

initial guess, default is a vector of zeros

tol : float

relative convergence tolerance, i.e. tol is scaled by ||r_0||_2

maxiter : int

maximum number of allowed iterations

xtype : type

dtype for the solution, default is automatic type detection

M : {array, matrix, sparse matrix, LinearOperator}

n x n, inverted preconditioner, i.e. solve M A A.H x = b.

callback : function

User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector

residuals : list

residuals has the residual norm history, including the initial residual, appended to it

Returns :

(xNew, info) :

xNew : an updated guess to the solution of Ax = b

info : halting status of cgne

0

successful exit

>0

convergence to tolerance not achieved, return iteration count instead.

<0

numerical breakdown, or illegal input

Notes

  • The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.

References

[R21]Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 276-7, 2003 http://www-users.cs.umn.edu/~saad/books.html

Examples

>>> from pyamg.krylov import cgne
>>> from pyamg.util.linalg import norm
>>> import numpy 
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = cgne(A,b, maxiter=2, tol=1e-8)
>>> print(norm(b - A*x))
46.1547104367
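
The callback hook can be used to monitor the iteration. The sketch below is not part of the original docstring; it reuses A, b, norm and cgne from the example above and records the norm of each iterate in a purely illustrative list.

>>> iterate_norms = []
>>> def my_callback(xk):
...     iterate_norms.append(norm(xk))  # xk is the current solution vector
...
>>> (x, flag) = cgne(A, b, maxiter=10, tol=1e-8, callback=my_callback)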

The _cgnr Module

pyamg.krylov._cgnr.cgnr(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)

Conjugate Gradient, Normal Residual algorithm

Applies CG to the normal equations, A.H A x = b. Left preconditioning is supported. Note that unless A is well-conditioned, the use of CGNR is inadvisable

Parameters :

A : {array, matrix, sparse matrix, LinearOperator}

n x n, linear system to solve

b : {array, matrix}

right hand side, shape is (n,) or (n,1)

x0 : {array, matrix}

initial guess, default is a vector of zeros

tol : float

relative convergence tolerance, i.e. tol is scaled by ||r_0||_2

maxiter : int

maximum number of allowed iterations

xtype : type

dtype for the solution, default is automatic type detection

M : {array, matrix, sparse matrix, LinearOperator}

n x n, inverted preconditioner, i.e. solve M A.H A x = b.

callback : function

User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector

residuals : list

residuals has the residual norm history, including the initial residual, appended to it

Returns :

(xNew, info) :

xNew : an updated guess to the solution of Ax = b

info : halting status of cgnr

0

successful exit

>0

convergence to tolerance not achieved, return iteration count instead.

<0

numerical breakdown, or illegal input

Notes

The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.

References

[R22]Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 276-7, 2003 http://www-users.cs.umn.edu/~saad/books.html

Examples

>>> from pyamg.krylov import cgnr
>>> from pyamg.util.linalg import norm
>>> import numpy 
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = cgnr(A,b, maxiter=2, tol=1e-8)
>>> print(norm(b - A*x))
9.3910201849
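
Since CGNR targets the normal equations, it is usually applied to nonsymmetric systems. The sketch below is not part of the original docstring; the diagonally dominant, nonsymmetric tridiagonal test matrix is purely illustrative.

>>> import numpy as np
>>> import scipy.sparse as sp
>>> from pyamg.krylov import cgnr
>>> from pyamg.util.linalg import norm
>>> n = 100
>>> data = np.vstack([-1.0*np.ones(n), 2.0*np.ones(n), -0.5*np.ones(n)])
>>> A = sp.spdiags(data, [-1, 0, 1], n, n, format='csr')
>>> b = np.ones(n)
>>> (x, flag) = cgnr(A, b, tol=1e-8, maxiter=n)
>>> r = norm(b - A*x)  # residual of the original system, not the normal equations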

The _fgmres Module

pyamg.krylov._fgmres.fgmres(A, b, x0=None, tol=1e-05, restrt=None, maxiter=None, xtype=None, M=None, callback=None, residuals=None)

Flexible Generalized Minimum Residual Method (fGMRES)

fGMRES iteratively refines the initial solution guess to the system Ax = b. fGMRES is flexible in the sense that the right preconditioner (M) can vary from iteration to iteration.

Parameters :

A : {array, matrix, sparse matrix, LinearOperator}

n x n, linear system to solve

b : {array, matrix}

right hand side, shape is (n,) or (n,1)

x0 : {array, matrix}

initial guess, default is a vector of zeros

tol : float

relative convergence tolerance, i.e. tol is scaled by ||r_0||_2

restrt : {None, int}

  • if int, restrt is max number of inner iterations and maxiter is the max number of outer iterations
  • if None, do not restart GMRES, and max number of inner iterations is maxiter

maxiter : {None, int}

  • if restrt is None, maxiter is the max number of inner iterations and GMRES does not restart
  • if restrt is int, maxiter is the max number of outer iterations, and restrt is the max number of inner iterations

xtype : type

dtype for the solution, default is automatic type detection

M : {array, matrix, sparse matrix, LinearOperator}

n x n, inverted preconditioner, i.e. solve A M x = b. M need not be stationary for fgmres

callback : function

User-supplied function is called after each iteration as callback( ||rk||_2 ), where rk is the current residual vector

residuals : list

residuals has the residual norm history, including the initial residual, appended to it

Returns :

(xNew, info) :

xNew : an updated guess to the solution of Ax = b

info : halting status of gmres

0

successful exit

>0

convergence to tolerance not achieved, return iteration count instead. This value is precisely the order of the Krylov space.

<0

numerical breakdown, or illegal input

Notes

  • The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
  • fGMRES allows for non-stationary preconditioners, as opposed to GMRES
  • For robustness, Householder reflections are used to orthonormalize the Krylov space. Givens rotations are used to provide the residual norm at each iteration. Flexibility implies that the right preconditioner, M or A.psolve, can vary from iteration to iteration.
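
To illustrate the flexibility noted above, the sketch below (not part of the original docstring) uses an inexact inner CG solve as the right preconditioner, so the action of M changes from one outer iteration to the next; the inner_solve helper is purely illustrative.

>>> import numpy as np
>>> from scipy.sparse.linalg import LinearOperator
>>> from pyamg.gallery import poisson
>>> from pyamg.krylov import fgmres, cg
>>> A = poisson((10,10), format='csr')
>>> b = np.ones((A.shape[0],))
>>> def inner_solve(r):
...     y, _ = cg(A, r, tol=1e-1, maxiter=5)  # loose inner solve; output varies with r
...     return y
...
>>> M = LinearOperator(A.shape, matvec=inner_solve, dtype=A.dtype)
>>> (x, flag) = fgmres(A, b, M=M, tol=1e-8)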

References

[R23]Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 151-172, pp. 272-275, 2003 http://www-users.cs.umn.edu/~saad/books.html

Examples

>>> from pyamg.krylov import fgmres
>>> from pyamg.util.linalg import norm
>>> import numpy 
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = fgmres(A,b, maxiter=2, tol=1e-8)
>>> print(norm(b - A*x))
6.5428213057

The _gmres Module

pyamg.krylov._gmres.gmres(A, b, x0=None, tol=1e-05, restrt=None, maxiter=None, xtype=None, M=None, callback=None, residuals=None, orthog='mgs', **kwargs)

Generalized Minimum Residual Method (GMRES)

GMRES iteratively refines the initial solution guess to the system Ax = b.

Parameters :

A : {array, matrix, sparse matrix, LinearOperator}

n x n, linear system to solve

b : {array, matrix}

right hand side, shape is (n,) or (n,1)

x0 : {array, matrix}

initial guess, default is a vector of zeros

tol : float

relative convergence tolerance, i.e. tol is scaled by the norm of the initial preconditioned residual

restrt : {None, int}

  • if int, restrt is max number of inner iterations and maxiter is the max number of outer iterations
  • if None, do not restart GMRES, and max number of inner iterations is maxiter

maxiter : {None, int}

  • if restrt is None, maxiter is the max number of inner iterations and GMRES does not restart
  • if restrt is int, maxiter is the max number of outer iterations, and restrt is the max number of inner iterations

xtype : type

dtype for the solution, default is automatic type detection

M : {array, matrix, sparse matrix, LinearOperator}

n x n, inverted preconditioner, i.e. solve M A x = b.

callback : function

User-supplied function is called after each iteration as callback( ||rk||_2 ), where rk is the current preconditioned residual vector

residuals : list

residuals contains the preconditioned residual norm history, including the initial residual.

orthog : string

‘householder’ calls _gmres_householder which uses Householder reflections to find the orthogonal basis for the Krylov space. ‘mgs’ calls _gmres_mgs which uses modified Gram-Schmidt to find the orthogonal basis for the Krylov space

Returns :

(xNew, info) :

xNew : an updated guess to the solution of Ax = b

info : halting status of gmres

0

successful exit

>0

convergence to tolerance not achieved, return iteration count instead. This value is precisely the order of the Krylov space.

<0

numerical breakdown, or illegal input

Notes

  • The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
  • The orthogonalization method orthog='householder' is more robust than orthog='mgs'; however, for the majority of problems, convergence is reached before 'mgs' loses orthogonality in the basis.
  • orthog='householder' has been more rigorously tested, although the default in the signature above is orthog='mgs'.

References

[R24]Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 151-172, pp. 272-275, 2003 http://www-users.cs.umn.edu/~saad/books.html

Examples

>>> from pyamg.krylov import gmres
>>> from pyamg.util.linalg import norm
>>> import numpy 
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = gmres(A,b, maxiter=2, tol=1e-8)
>>> print(norm(b - A*x))
6.5428213057
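
Restarted GMRES is obtained by combining restrt and maxiter. The sketch below is not part of the original documentation and repeats the example above with restart cycles; the specific cycle lengths are illustrative.

>>> import numpy as np
>>> from pyamg.gallery import poisson
>>> from pyamg.krylov import gmres
>>> from pyamg.util.linalg import norm
>>> A = poisson((10,10))
>>> b = np.ones((A.shape[0],))
>>> # GMRES(10): at most 10 inner (Krylov) iterations per cycle and 5 outer restart cycles
>>> (x, flag) = gmres(A, b, restrt=10, maxiter=5, tol=1e-8)
>>> r = norm(b - A*x)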

The _gmres_householder Module

pyamg.krylov._gmres_householder.gmres_householder(A, b, x0=None, tol=1e-05, restrt=None, maxiter=None, xtype=None, M=None, callback=None, residuals=None)

Generalized Minimum Residual Method (GMRES)

GMRES iteratively refines the initial solution guess to the system Ax = b. Householder reflections are used for orthogonalization.

Parameters :

A : {array, matrix, sparse matrix, LinearOperator}

n x n, linear system to solve

b : {array, matrix}

right hand side, shape is (n,) or (n,1)

x0 : {array, matrix}

initial guess, default is a vector of zeros

tol : float

relative convergence tolerance, i.e. tol is scaled by the norm of the initial preconditioned residual

restrt : {None, int}

  • if int, restrt is max number of inner iterations and maxiter is the max number of outer iterations
  • if None, do not restart GMRES, and max number of inner iterations is maxiter

maxiter : {None, int}

  • if restrt is None, maxiter is the max number of inner iterations and GMRES does not restart
  • if restrt is int, maxiter is the max number of outer iterations, and restrt is the max number of inner iterations

xtype : type

dtype for the solution, default is automatic type detection

M : {array, matrix, sparse matrix, LinearOperator}

n x n, inverted preconditioner, i.e. solve M A x = b.

callback : function

User-supplied function is called after each iteration as callback( ||rk||_2 ), where rk is the current preconditioned residual vector

residuals : list

residuals contains the preconditioned residual norm history, including the initial residual.

Returns :

(xNew, info) :

xNew : an updated guess to the solution of Ax = b

info : halting status of gmres

0

successful exit

>0

convergence to tolerance not achieved, return iteration count instead. This value is precisely the order of the Krylov space.

<0

numerical breakdown, or illegal input

Notes

  • The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
  • For robustness, Householder reflections are used to orthonormalize the Krylov space. Givens rotations are used to provide the residual norm at each iteration.

References

[R25]Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 151-172, pp. 272-275, 2003 http://www-users.cs.umn.edu/~saad/books.html

Examples

>>> from pyamg.krylov import gmres
>>> from pyamg.util.linalg import norm
>>> import numpy 
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = gmres(A,b, maxiter=2, tol=1e-8, orthog='householder')
>>> print(norm(b - A*x))
6.5428213057

The _gmres_mgs Module

pyamg.krylov._gmres_mgs.gmres_mgs(A, b, x0=None, tol=1e-05, restrt=None, maxiter=None, xtype=None, M=None, callback=None, residuals=None, reorth=False)

Generalized Minimum Residual Method (GMRES)

GMRES iteratively refines the initial solution guess to the system Ax = b. This is the modified Gram-Schmidt variant.

Parameters :

A : {array, matrix, sparse matrix, LinearOperator}

n x n, linear system to solve

b : {array, matrix}

right hand side, shape is (n,) or (n,1)

x0 : {array, matrix}

initial guess, default is a vector of zeros

tol : float

relative convergence tolerance, i.e. tol is scaled by the norm of the initial preconditioned residual

restrt : {None, int}

  • if int, restrt is max number of inner iterations and maxiter is the max number of outer iterations
  • if None, do not restart GMRES, and max number of inner iterations is maxiter

maxiter : {None, int}

  • if restrt is None, maxiter is the max number of inner iterations and GMRES does not restart
  • if restrt is int, maxiter is the max number of outer iterations, and restrt is the max number of inner iterations

xtype : type

dtype for the solution, default is automatic type detection

M : {array, matrix, sparse matrix, LinearOperator}

n x n, inverted preconditioner, i.e. solve M A x = b.

callback : function

User-supplied function is called after each iteration as callback( ||rk||_2 ), where rk is the current preconditioned residual vector

residuals : list

residuals contains the preconditioned residual norm history, including the initial residual.

reorth : boolean

If True, then a check is made whether to re-orthogonalize the Krylov space each GMRES iteration

Returns :

(xNew, info) :

xNew : an updated guess to the solution of Ax = b

info : halting status of gmres

0

successful exit

>0

convergence to tolerance not achieved, return iteration count instead. This value is precisely the order of the Krylov space.

<0

numerical breakdown, or illegal input

Notes

  • The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
  • For robustness, modified Gram-Schmidt is used to orthogonalize the Krylov space. Givens rotations are used to provide the residual norm at each iteration.

References

[R26]Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 151-172, pp. 272-275, 2003 http://www-users.cs.umn.edu/~saad/books.html
[R27]C. T. Kelley, http://www4.ncsu.edu/~ctk/matlab_roots.html

Examples

>>> from pyamg.krylov import gmres
>>> from pyamg.util.linalg import norm
>>> import numpy 
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = gmres(A,b, maxiter=2, tol=1e-8, orthog='mgs')
>>> print(norm(b - A*x))
6.5428213057
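
When the modified Gram-Schmidt variant is called directly, reorth=True requests a re-orthogonalization check at each iteration. The sketch below is not part of the original docstring and assumes the private module path shown in the signature above.

>>> import numpy as np
>>> from pyamg.gallery import poisson
>>> from pyamg.krylov._gmres_mgs import gmres_mgs
>>> A = poisson((10,10))
>>> b = np.ones((A.shape[0],))
>>> res = []  # preconditioned residual norm history
>>> (x, flag) = gmres_mgs(A, b, tol=1e-8, reorth=True, residuals=res)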

The setup Module

pyamg.krylov.setup.configuration(parent_package='', top_path=None)