LevenbergMarquardtFitter = class LevenbergMarquardtFitter(IterativeFitter)
LevenbergMarquardtFitter(xdata, model, **kwargs)
Non-linear fitter using the Levenberg-Marquardt method.
Implementation of the Levenberg-Marquardt algorithm to fit the parameters
of a non-linear model. It is a gradient fitter which uses partial
derivatives to find the downhill gradient. Consequently it ends in the
first minimum it finds.
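In Numerical Recipes notation, each iteration solves the damped normal equations below; the damping factor is the `lamda` attribute listed further down. A sketch of the standard method, not copied from this implementation:

```latex
% r = y - f(x; p) is the residual vector, J the Jacobian df/dp,
% and \lambda the damping ("lamda") factor:
\left( J^{\top} J + \lambda \, \operatorname{diag}(J^{\top} J) \right) \delta
    = J^{\top} r ,
\qquad p \leftarrow p + \delta .
% \lambda is increased when \chi^2 rises (step rejected) and
% decreased when \chi^2 drops (step accepted).
```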
The original C version stems from Numerical Recipes, with some additions of
my own; this might be the third or fourth transcription of it.
Author: Do Kester.
Examples
--------
# assume x and y are 1-d float arrays:
>>> x = numpy.arange( 100, dtype=float ) / 10
>>> y = numpy.arange( 100, dtype=float ) / 122              # make slope
>>> rng = numpy.random.default_rng( 12345 )                 # Gaussian random number generator
>>> y += rng.standard_normal( 100 ) * 0.2                   # add noise
>>> y[9:12] += numpy.asarray( [5,10,7], dtype=float )       # make some peak
# define a model: GaussModel + background polynomial
>>> gauss = GaussModel( ) # Gaussian
>>> gauss += PolynomialModel( 1 ) # add linear background
>>> gauss.setParameters( numpy.asarray( [1,1,0.1,0,0], dtype=float ) ) # initial parameter guess
>>> print( gauss.getNumberOfParameters( ) )      # 5 ( = 3 for Gauss + 2 for line )
>>> gauss.keepFixed( numpy.asarray( [2] ), numpy.asarray( [0.1], dtype=float ) ) # keep width fixed at 0.1
>>> lmfit = LevenbergMarquardtFitter( x, gauss )
>>> param = lmfit.fit( y )
>>> print( len( param ) )                        # 4 ( = 5 - 1 fixed )
>>> stdev = lmfit.getStandardDeviations( )       # stdevs on the parameters
>>> chisq = lmfit.getChiSquared( )
>>> scale = lmfit.getScale( ) # noise scale
>>> yfit = lmfit.getResult( ) # fitted values
>>> yband = lmfit.monteCarloError( ) # 1 sigma confidence region
# for diagnostics ( or just for fun )
>>> lmfit = LevenbergMarquardtFitter( x, gauss )
>>> lmfit.setVerbose( 2 ) # report every 100th iteration
>>> plotter = IterationPlotter( ) # from BayesicFitting
>>> lmfit.setPlotter( plotter, 20 ) # make a plot every 20th iteration
>>> param = lmfit.fit( y )
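If BayesicFitting is not at hand, the same Gauss-plus-line fit can be cross-checked with `scipy.optimize.curve_fit`, which also uses Levenberg-Marquardt for unconstrained problems. This is a hedged sketch, not this class's implementation; `gauss_plus_line` and the parameter ordering are illustrative assumptions.

```python
import numpy
from scipy.optimize import curve_fit

# Gaussian peak on a linear background; parameter order mirrors the example
# above: (amplitude, center, width) for the Gauss, (offset, slope) for the line.
def gauss_plus_line( x, amp, cen, wid, c0, c1 ):
    return amp * numpy.exp( -0.5 * ( ( x - cen ) / wid ) ** 2 ) + c0 + c1 * x

rng = numpy.random.default_rng( 12345 )
x = numpy.arange( 100, dtype=float ) / 10
y = x / 12.2                                            # slope
y += rng.standard_normal( 100 ) * 0.2                   # noise
y[9:12] += numpy.asarray( [5, 10, 7], dtype=float )     # peak near x = 1

# curve_fit defaults to Levenberg-Marquardt when no bounds are given.
p0 = [10.0, 1.0, 0.1, 0.0, 0.1]                         # initial guess
param, cov = curve_fit( gauss_plus_line, x, y, p0=p0 )
stdev = numpy.sqrt( numpy.diag( cov ) )                 # stdevs on the parameters
yfit = gauss_plus_line( x, *param )                     # fitted values
chisq = numpy.sum( ( y - yfit ) ** 2 )
```

The fitted center should land close to x = 1, where the peak was injected.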
Notes
-----
In case of problems look at the "Troubles" page in the documentation area.
Limitations
-----------
1. LMF is *not* guaranteed to find the global minimum.
2. The calculation of the evidence is a Gaussian approximation, which is
   only exact for linear models with a fixed scale.
Attributes
----------
xdata : array_like
vector of numbers as input for model
model : Model
the model to be fitted
lamda : float
    to balance the curvature matrix (see Numerical Recipes)
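The role of `lamda` can be seen in a toy Levenberg-Marquardt loop for y = a * exp( b * x ): the damping is raised after a rejected step and lowered after an accepted one. A self-contained sketch under those assumptions, not this class's code:

```python
import numpy

# Noise-free target data for y = a * exp( b * x ) with a = 2, b = 1.5.
x = numpy.linspace( 0.0, 1.0, 50 )
y = 2.0 * numpy.exp( 1.5 * x )

p = numpy.array( [1.0, 1.0] )        # initial guess for (a, b)
lamda = 0.001
chisq = numpy.sum( ( y - p[0] * numpy.exp( p[1] * x ) ) ** 2 )

for _ in range( 100 ):
    f = p[0] * numpy.exp( p[1] * x )
    J = numpy.stack( [ numpy.exp( p[1] * x ),             # df/da
                       p[0] * x * numpy.exp( p[1] * x )   # df/db
                     ], axis=1 )
    A = J.T @ J
    A[ numpy.diag_indices_from( A ) ] *= 1.0 + lamda      # damp the curvature
    delta = numpy.linalg.solve( A, J.T @ ( y - f ) )
    trial = p + delta
    trialchisq = numpy.sum( ( y - trial[0] * numpy.exp( trial[1] * x ) ) ** 2 )
    if trialchisq < chisq:           # step accepted: relax the damping
        p, chisq, lamda = trial, trialchisq, lamda / 10
    else:                            # step rejected: increase the damping
        lamda *= 10
    if chisq < 1e-12:
        break
```

With exact data the loop recovers (a, b) = (2.0, 1.5) to high precision.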
- Method resolution order:
- LevenbergMarquardtFitter
- IterativeFitter
- BaseFitter
- builtins.object
Constructor:
- LevenbergMarquardtFitter( xdata, model, **kwargs )
- Create a new fitter, providing xdata and model.
Parameters
----------
xdata : array_like
vector of independent input values
model : Model
a model function to be fitted
kwargs : dict
Possibly includes keywords from
IterativeFitter : maxIter, tolerance, verbose
BaseFitter : map, keep, fixedScale
Methods defined here:
- checkLimits( fitpar, fitindex )
- chiSquaredExtra( data, params, weights=None )
- Add normalizing data to chisq.
- fit( data, weights=None, par0=None, keep=None, limits=None,
maxiter=None, tolerance=None, verbose=None, plot=False, accuracy=None,
callback=None)
- Return the model fitted to the data arrays.
It also calculates the Hessian matrix and chisq.
Parameters
----------
data : array_like
the data vector to be fitted
weights : array_like
weights pertaining to the data
accuracy : float or array_like
accuracy of (individual) data
par0 : array_like
initial values for the parameters of the model
default: from model
keep : dict of {int : float}
dictionary of indices (int) of parameters to be kept at a fixed value (float).
The values of `keep` are only valid for *this* fit.
See also `LevenbergMarquardtFitter( ..., keep=dict )`.
limits : None or list of 2 floats or list of 2 array_like
None : no limits applied
[lo,hi] : low and high limits for all values of the parameters
[la,ha] : arrays of low and high limits for all values of the parameters
maxiter : int
    maximum number of iterations. default=1000
tolerance : float
    absolute and relative tolerance. default=0.0001
verbose : int
0 : silent
>0 : more output
default=1
plot : bool
plot the results
callback : callable
is called each iteration as
`val = callback( val )`
where val is the parameter list.
Raises
------
ConvergenceError : when the fitter stops before the tolerance has been reached.
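The `limits` argument accepts either two scalars ([lo,hi] for all parameters) or two arrays ([la,ha], one pair per parameter). The actual handling lives in `checkLimits` / `limitsFit`; `apply_limits` below is a hypothetical helper sketching how a trial parameter vector could be clipped under either form:

```python
import numpy

# Hypothetical helper, not part of the class: clip a trial parameter
# vector to [lo,hi] (scalars) or [la,ha] (per-parameter arrays).
def apply_limits( params, limits ):
    if limits is None:               # None : no limits applied
        return params
    lo, hi = limits
    # numpy.clip broadcasts, so scalar and array limits both work.
    return numpy.clip( params, lo, hi )

p = numpy.asarray( [-0.5, 0.3, 2.7] )
print( apply_limits( p, [0.0, 1.0] ) )                               # scalar limits
print( apply_limits( p, [numpy.zeros( 3 ),
                         numpy.asarray( [1.0, 1.0, 3.0] )] ) )       # per-parameter limits
```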
- getParameters()
- Return the fitter's current parameters (for debugging only);
use Model.getParameters( ) otherwise.
- trialfit( params, fi, data, weights, verbose, maxiter )
Methods inherited from IterativeFitter:
Methods inherited from BaseFitter:
- checkNan( ydata, weights=None, accuracy=None )
- chiSquared( ydata, params=None, weights=None )
- fitpostscript( ydata, plot=False )
- getCovarianceMatrix()
- getDesign( params=None, xdata=None, index=None )
- getEvidence( limits=None, noiseLimits=None )
- getHessian( params=None, weights=None, index=None )
- getInverseHessian( params=None, weights=None, index=None )
- getLogLikelihood( autoscale=False, var=1.0)
- getLogZ( limits=None, noiseLimits=None )
- getScale()
- getStandardDeviations()
- getVector( ydata, index=None )
- insertParameters( fitpar, index=None, into=None )
- keepFixed( keep=None )
- limitsFit( fitmethod, ydata, weights=None, keep=None )
- makeVariance( scale=None )
- modelFit( ydata, weights=None, keep=None )
- monteCarloError( xdata=None, monteCarlo=None )
- normalize( normdfdp, normdata, weight=1.0)
- plotResult( xdata=None, ydata=None, model=None, residuals=True, confidence=False, show=True )
- setMinimumScale( scale=0)