
"""
Unified interfaces to root finding algorithms.

Functions
---------
- root : find a root of a vector function.
"""
__all__ = ['root']

import numpy as np

from warnings import warn

from ._optimize import MemoizeJac, OptimizeResult, _check_unknown_options
from ._minpack_py import _root_hybr, leastsq
from ._spectral import _root_df_sane
from . import _nonlin as nonlin

ROOT_METHODS = ['hybr', 'lm', 'broyden1', 'broyden2', 'anderson',
                'linearmixing', 'diagbroyden', 'excitingmixing', 'krylov',
                'df-sane']


def root(fun, x0, args=(), method='hybr', jac=None, tol=None, callback=None,
         options=None):
    r"""
Find a root of a vector function.

Parameters
----------
fun : callable
    A vector function to find a root of.

    Suppose the callable has signature ``f0(x, *my_args, **my_kwargs)``, where
    ``my_args`` and ``my_kwargs`` are required positional and keyword arguments.
    Rather than passing ``f0`` as the callable, wrap it to accept
    only ``x``; e.g., pass ``fun=lambda x: f0(x, *my_args, **my_kwargs)`` as the
    callable, where ``my_args`` (tuple) and ``my_kwargs`` (dict) have been
    gathered before invoking this function.
x0 : ndarray
    Initial guess.
args : tuple, optional
    Extra arguments passed to the objective function and its Jacobian.
method : str, optional
    Type of solver. Should be one of

    - 'hybr'             :ref:`(see here) <optimize.root-hybr>`
    - 'lm'               :ref:`(see here) <optimize.root-lm>`
    - 'broyden1'         :ref:`(see here) <optimize.root-broyden1>`
    - 'broyden2'         :ref:`(see here) <optimize.root-broyden2>`
    - 'anderson'         :ref:`(see here) <optimize.root-anderson>`
    - 'linearmixing'     :ref:`(see here) <optimize.root-linearmixing>`
    - 'diagbroyden'      :ref:`(see here) <optimize.root-diagbroyden>`
    - 'excitingmixing'   :ref:`(see here) <optimize.root-excitingmixing>`
    - 'krylov'           :ref:`(see here) <optimize.root-krylov>`
    - 'df-sane'          :ref:`(see here) <optimize.root-dfsane>`

jac : bool or callable, optional
    If `jac` is a Boolean and is True, `fun` is assumed to return the
    value of the Jacobian along with the objective function. If False, the
    Jacobian will be estimated numerically.
    `jac` can also be a callable returning the Jacobian of `fun`. In
    this case, it must accept the same arguments as `fun`.
tol : float, optional
    Tolerance for termination. For detailed control, use solver-specific
    options.
callback : function, optional
    Optional callback function. It is called on every iteration as
    ``callback(x, f)`` where `x` is the current solution and `f`
    the corresponding residual. For all methods but 'hybr' and 'lm'.
options : dict, optional
    A dictionary of solver options. E.g., `xtol` or `maxiter`, see
    :obj:`show_options()` for details.

Returns
-------
sol : OptimizeResult
    The solution represented as an ``OptimizeResult`` object.
    Important attributes are: ``x`` the solution array, ``success`` a
    Boolean flag indicating if the algorithm exited successfully and
    ``message`` which describes the cause of the termination. See
    `OptimizeResult` for a description of other attributes.

See also
--------
show_options : Additional options accepted by the solvers

Notes
-----
This section describes the available solvers that can be selected by the
'method' parameter. The default method is *hybr*.

Method *hybr* uses a modification of the Powell hybrid method as
implemented in MINPACK [1]_.

Method *lm* solves the system of nonlinear equations in a least squares
sense using a modification of the Levenberg-Marquardt algorithm as
implemented in MINPACK [1]_.

Method *df-sane* is a derivative-free spectral method [3]_.

Methods *broyden1*, *broyden2*, *anderson*, *linearmixing*,
*diagbroyden*, *excitingmixing*, *krylov* are inexact Newton methods,
with backtracking or full line searches [2]_. Each method corresponds
to a particular Jacobian approximation.

- Method *broyden1* uses Broyden's first Jacobian approximation; it is
  known as Broyden's good method.
- Method *broyden2* uses Broyden's second Jacobian approximation; it
  is known as Broyden's bad method.
- Method *anderson* uses (extended) Anderson mixing.
- Method *krylov* uses Krylov approximation for the inverse Jacobian. It
  is suitable for large-scale problems.
- Method *diagbroyden* uses diagonal Broyden Jacobian approximation.
- Method *linearmixing* uses a scalar Jacobian approximation.
- Method *excitingmixing* uses a tuned diagonal Jacobian
  approximation.

.. warning::

    The algorithms implemented for methods *diagbroyden*,
    *linearmixing* and *excitingmixing* may be useful for specific
    problems, but whether they will work may depend strongly on the
    problem.

.. versionadded:: 0.11.0

References
----------
.. [1] More, Jorge J., Burton S. Garbow, and Kenneth E. Hillstrom.
   1980. User Guide for MINPACK-1.
.. [2] C. T. Kelley. 1995. Iterative Methods for Linear and Nonlinear
   Equations. Society for Industrial and Applied Mathematics.
   <https://archive.siam.org/books/kelley/fr16/>
.. [3] W. La Cruz, J.M. Martinez, M. Raydan. Math. Comp. 75, 1429 (2006).

Examples
--------
The following functions define a system of nonlinear equations and its
Jacobian.

>>> import numpy as np
>>> def fun(x):
...     return [x[0]  + 0.5 * (x[0] - x[1])**3 - 1.0,
...             0.5 * (x[1] - x[0])**3 + x[1]]

>>> def jac(x):
...     return np.array([[1 + 1.5 * (x[0] - x[1])**2,
...                       -1.5 * (x[0] - x[1])**2],
...                      [-1.5 * (x[1] - x[0])**2,
...                       1 + 1.5 * (x[1] - x[0])**2]])

A solution can be obtained as follows.

>>> from scipy import optimize
>>> sol = optimize.root(fun, [0, 0], jac=jac, method='hybr')
>>> sol.x
array([ 0.8411639,  0.1588361])
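
If ``fun`` returns the residual together with the Jacobian, pass
``jac=True``. A minimal sketch reusing ``fun`` and ``jac`` from above,
here with the Levenberg-Marquardt solver (printed digits may differ
between methods, so only the residual is checked):

>>> def fun_and_jac(x):
...     return fun(x), jac(x)
>>> sol = optimize.root(fun_and_jac, [0, 0], jac=True, method='lm')
>>> np.allclose(fun(sol.x), 0.0, atol=1e-8)
True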

**Large problem**

Suppose that we needed to solve the following integrodifferential
equation on the square :math:`[0,1]\times[0,1]`:

.. math::

   \nabla^2 P = 10 \left(\int_0^1\int_0^1\cosh(P)\,dx\,dy\right)^2

with :math:`P(x,1) = 1` and :math:`P=0` elsewhere on the boundary of
the square.

The solution can be found using the ``method='krylov'`` solver:

>>> from scipy import optimize
>>> # parameters
>>> nx, ny = 75, 75
>>> hx, hy = 1./(nx-1), 1./(ny-1)

>>> P_left, P_right = 0, 0
>>> P_top, P_bottom = 1, 0

>>> def residual(P):
...    d2x = np.zeros_like(P)
...    d2y = np.zeros_like(P)
...
...    d2x[1:-1] = (P[2:]   - 2*P[1:-1] + P[:-2]) / hx/hx
...    d2x[0]    = (P[1]    - 2*P[0]    + P_left)/hx/hx
...    d2x[-1]   = (P_right - 2*P[-1]   + P[-2])/hx/hx
...
...    d2y[:,1:-1] = (P[:,2:] - 2*P[:,1:-1] + P[:,:-2])/hy/hy
...    d2y[:,0]    = (P[:,1]  - 2*P[:,0]    + P_bottom)/hy/hy
...    d2y[:,-1]   = (P_top   - 2*P[:,-1]   + P[:,-2])/hy/hy
...
...    return d2x + d2y - 10*np.cosh(P).mean()**2

>>> guess = np.zeros((nx, ny), float)
>>> sol = optimize.root(residual, guess, method='krylov')
>>> print('Residual: %g' % abs(residual(sol.x)).max())
Residual: 5.7972e-06  # may vary

>>> import matplotlib.pyplot as plt
>>> x, y = np.mgrid[0:1:(nx*1j), 0:1:(ny*1j)]
>>> plt.pcolormesh(x, y, sol.x, shading='gouraud')
>>> plt.colorbar()
>>> plt.show()
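
The derivative-free *df-sane* solver handles the small system from the
first example as well; a brief sketch (no Jacobian required):

>>> sol = optimize.root(fun, [0, 0], method='df-sane')
>>> sol.x
array([0.8411639, 0.1588361])  # may vary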

    """
    def _wrapped_fun(*fargs):
        """
        Wrapped `func` to track the number of times
        the function has been called.
        """
        _wrapped_fun.nfev += 1
        return fun(*fargs)

    _wrapped_fun.nfev = 0

    if not isinstance(args, tuple):
        args = (args,)

    meth = method.lower()
    if options is None:
        options = {}

    if callback is not None and meth in ('hybr', 'lm'):
        warn(f'Method {method} does not accept callback.',
             RuntimeWarning, stacklevel=2)

    # fun also returns the Jacobian
    if not callable(jac) and meth in ('hybr', 'lm'):
        if bool(jac):
            fun = MemoizeJac(fun)
            jac = fun.derivative
        else:
            jac = None

    # set default tolerances
    if tol is not None:
        options = dict(options)
        if meth in ('hybr', 'lm'):
            options.setdefault('xtol', tol)
        elif meth in ('df-sane',):
            options.setdefault('ftol', tol)
        elif meth in ('broyden1', 'broyden2', 'anderson', 'linearmixing',
                      'diagbroyden', 'excitingmixing', 'krylov'):
            options.setdefault('xtol', tol)
            options.setdefault('xatol', np.inf)
            options.setdefault('fatol', np.inf)

    if meth == 'hybr':
        sol = _root_hybr(_wrapped_fun, x0, args=args, jac=jac, **options)
    elif meth == 'lm':
        sol = _root_leastsq(_wrapped_fun, x0, args=args, jac=jac, **options)
    elif meth == 'df-sane':
        _warn_jac_unused(jac, method)
        sol = _root_df_sane(_wrapped_fun, x0, args=args, callback=callback,
                            **options)
    elif meth in ('broyden1', 'broyden2', 'anderson', 'linearmixing',
                  'diagbroyden', 'excitingmixing', 'krylov'):
        _warn_jac_unused(jac, method)
        sol = _root_nonlin_solve(_wrapped_fun, x0, args=args, jac=jac,
                                 _method=meth, _callback=callback,
                                 **options)
    else:
        raise ValueError(f'Unknown solver {method}')

    sol.nfev = _wrapped_fun.nfev
    return sol


def _warn_jac_unused(jac, method):
    if jac is not None:
        warn(f'Method {method} does not use the jacobian (jac).',
             RuntimeWarning, stacklevel=2)


def _root_leastsq(fun, x0, args=(), jac=None,
                  col_deriv=0, xtol=1.49012e-08, ftol=1.49012e-08,
                  gtol=0.0, maxiter=0, eps=0.0, factor=100, diag=None,
                  **unknown_options):
    """
Solve for least squares with Levenberg-Marquardt

Options
-------
col_deriv : bool
    non-zero to specify that the Jacobian function computes derivatives
    down the columns (faster, because there is no transpose operation).
ftol : float
    Relative error desired in the sum of squares.
xtol : float
    Relative error desired in the approximate solution.
gtol : float
    Orthogonality desired between the function vector and the columns
    of the Jacobian.
maxiter : int
    The maximum number of calls to the function. If zero, then
    100*(N+1) is the maximum where N is the number of elements in x0.
eps : float
    A suitable step length for the forward-difference approximation of
    the Jacobian (for Dfun=None). If `eps` is less than the machine
    precision, it is assumed that the relative errors in the functions
    are of the order of the machine precision.
factor : float
    A parameter determining the initial step bound
    (``factor * || diag * x||``). Should be in interval ``(0.1, 100)``.
diag : sequence
    N positive entries that serve as scale factors for the variables.
    """
    nfev = 0

    def _wrapped_fun(*fargs):
        nonlocal nfev
        nfev += 1
        return fun(*fargs)

    _check_unknown_options(unknown_options)
    x, cov_x, info, msg, ier = leastsq(_wrapped_fun, x0, args=args, Dfun=jac,
                                       full_output=True, col_deriv=col_deriv,
                                       xtol=xtol, ftol=ftol, gtol=gtol,
                                       maxfev=maxiter, epsfcn=eps,
                                       factor=factor, diag=diag)
    sol = OptimizeResult(x=x, message=msg, status=ier,
                         success=ier in (1, 2, 3, 4), cov_x=cov_x,
                         fun=info.pop('fvec'))
    sol.update(info)
    sol.nfev = nfev
    return sol


def _root_nonlin_solve(fun, x0, args=(), jac=None,
                       _callback=None, _method=None,
                       nit=None, disp=False, maxiter=None,
                       ftol=None, fatol=None, xtol=None, xatol=None,
                       tol_norm=None, line_search='armijo', jac_options=None,
                       **unknown_options):
    _check_unknown_options(unknown_options)

    f_tol = fatol
    f_rtol = ftol
    x_tol = xatol
    x_rtol = xtol
    verbose = disp
    if jac_options is None:
        jac_options = dict()

    jacobian = {'broyden1': nonlin.BroydenFirst,
                'broyden2': nonlin.BroydenSecond,
                'anderson': nonlin.Anderson,
                'linearmixing': nonlin.LinearMixing,
                'diagbroyden': nonlin.DiagBroyden,
                'excitingmixing': nonlin.ExcitingMixing,
                'krylov': nonlin.KrylovJacobian
                }[_method]

    if args:
        if jac is True:
            def f(x):
                return fun(x, *args)[0]
        else:
            def f(x):
                return fun(x, *args)
    else:
        f = fun

    x, info = nonlin.nonlin_solve(f, x0, jacobian=jacobian(**jac_options),
                                  iter=nit, verbose=verbose,
                                  maxiter=maxiter, f_tol=f_tol,
                                  f_rtol=f_rtol, x_tol=x_tol,
                                  x_rtol=x_rtol, tol_norm=tol_norm,
                                  line_search=line_search,
                                  callback=_callback, full_output=True,
                                  raise_exception=False)
    sol = OptimizeResult(x=x)
    sol.update(info)
    return sol


def _root_broyden1_doc():
    """
Options
-------
nit : int, optional
    Number of iterations to make. If omitted (default), make as many
    as required to meet tolerances.
disp : bool, optional
    Print status to stdout on every iteration.
maxiter : int, optional
    Maximum number of iterations to make.
ftol : float, optional
    Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
    Absolute tolerance (in max-norm) for the residual.
    If omitted, default is 6e-6.
xtol : float, optional
    Relative minimum step size. If omitted, not used.
xatol : float, optional
    Absolute minimum step size, as determined from the Jacobian
    approximation. If the step size is smaller than this, optimization
    is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
    Norm to use in convergence check. Default is the maximum norm.
line_search : {None, 'armijo' (default), 'wolfe'}, optional
    Which type of line search to use to determine the step size in
    the direction given by the Jacobian approximation. Defaults to
    'armijo'.
jac_options : dict, optional
    Options for the respective Jacobian approximation.

    alpha : float, optional
        Initial guess for the Jacobian is (-1/alpha).
    reduction_method : str or tuple, optional
        Method used in ensuring that the rank of the Broyden
        matrix stays low. Can either be a string giving the
        name of the method, or a tuple of the form ``(method,
        param1, param2, ...)`` that gives the name of the
        method and values for additional parameters.

        Methods available:

        - ``restart``: drop all matrix columns. Has no extra parameters.
        - ``simple``: drop oldest matrix column. Has no extra parameters.
        - ``svd``: keep only the most significant SVD components.
          Takes an extra parameter, ``to_retain``, which determines the
          number of SVD components to retain when rank reduction is done.
          Default is ``max_rank - 2``.

    max_rank : int, optional
        Maximum rank for the Broyden matrix.
        Default is infinity (i.e., no rank reduction).

Examples
--------
>>> import numpy as np
>>> def func(x):
...     return np.cos(x) + x[::-1] - [1, 2, 3, 4]
...
>>> from scipy import optimize
>>> res = optimize.root(func, [1, 1, 1, 1], method='broyden1', tol=1e-14)
>>> x = res.x
>>> x
array([4.04674914, 3.91158389, 2.71791677, 1.61756251])
>>> np.cos(x) + x[::-1]
array([1., 2., 3., 4.])
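
Rank reduction for the Broyden matrix can be requested through
``jac_options``. An illustrative sketch: ``('svd', 2)`` retains two SVD
components whenever the rank is reduced to ``max_rank``:

>>> res = optimize.root(func, [1, 1, 1, 1], method='broyden1',
...                     options={'jac_options':
...                              {'reduction_method': ('svd', 2),
...                               'max_rank': 3}})
>>> res.x
array([4.04674914, 3.91158389, 2.71791677, 1.61756251])  # may vary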

    """
    pass


def _root_broyden2_doc():
    """
Options
-------
nit : int, optional
    Number of iterations to make. If omitted (default), make as many
    as required to meet tolerances.
disp : bool, optional
    Print status to stdout on every iteration.
maxiter : int, optional
    Maximum number of iterations to make.
ftol : float, optional
    Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
    Absolute tolerance (in max-norm) for the residual.
    If omitted, default is 6e-6.
xtol : float, optional
    Relative minimum step size. If omitted, not used.
xatol : float, optional
    Absolute minimum step size, as determined from the Jacobian
    approximation. If the step size is smaller than this, optimization
    is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
    Norm to use in convergence check. Default is the maximum norm.
line_search : {None, 'armijo' (default), 'wolfe'}, optional
    Which type of line search to use to determine the step size in
    the direction given by the Jacobian approximation. Defaults to
    'armijo'.
jac_options : dict, optional
    Options for the respective Jacobian approximation.

    alpha : float, optional
        Initial guess for the Jacobian is (-1/alpha).
    reduction_method : str or tuple, optional
        Method used in ensuring that the rank of the Broyden
        matrix stays low. Can either be a string giving the
        name of the method, or a tuple of the form ``(method,
        param1, param2, ...)`` that gives the name of the
        method and values for additional parameters.

        Methods available:

        - ``restart``: drop all matrix columns. Has no extra parameters.
        - ``simple``: drop oldest matrix column. Has no extra parameters.
        - ``svd``: keep only the most significant SVD components.
          Takes an extra parameter, ``to_retain``, which determines the
          number of SVD components to retain when rank reduction is done.
          Default is ``max_rank - 2``.

    max_rank : int, optional
        Maximum rank for the Broyden matrix.
        Default is infinity (i.e., no rank reduction).
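
Examples
--------
A minimal sketch mirroring the *broyden1* example with Broyden's bad
method (convergence behaviour may differ between the two updates):

>>> import numpy as np
>>> def func(x):
...     return np.cos(x) + x[::-1] - [1, 2, 3, 4]
>>> from scipy import optimize
>>> res = optimize.root(func, [1, 1, 1, 1], method='broyden2', tol=1e-14)
>>> np.allclose(np.cos(res.x) + res.x[::-1], [1, 2, 3, 4])
True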
    """
    pass


def _root_anderson_doc():
    """
Options
-------
nit : int, optional
    Number of iterations to make. If omitted (default), make as many
    as required to meet tolerances.
disp : bool, optional
    Print status to stdout on every iteration.
maxiter : int, optional
    Maximum number of iterations to make.
ftol : float, optional
    Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
    Absolute tolerance (in max-norm) for the residual.
    If omitted, default is 6e-6.
xtol : float, optional
    Relative minimum step size. If omitted, not used.
xatol : float, optional
    Absolute minimum step size, as determined from the Jacobian
    approximation. If the step size is smaller than this, optimization
    is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
    Norm to use in convergence check. Default is the maximum norm.
line_search : {None, 'armijo' (default), 'wolfe'}, optional
    Which type of line search to use to determine the step size in
    the direction given by the Jacobian approximation. Defaults to
    'armijo'.
jac_options : dict, optional
    Options for the respective Jacobian approximation.

    alpha : float, optional
        Initial guess for the Jacobian is (-1/alpha).
    M : float, optional
        Number of previous vectors to retain. Defaults to 5.
    w0 : float, optional
        Regularization parameter for numerical stability.
        Compared to unity, good values of the order of 0.01.
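
Examples
--------
An illustrative sketch forwarding the mixing parameters ``M`` and ``w0``
through `root` (the values shown are for demonstration only):

>>> import numpy as np
>>> def func(x):
...     return np.cos(x) + x[::-1] - [1, 2, 3, 4]
>>> from scipy import optimize
>>> res = optimize.root(func, [1, 1, 1, 1], method='anderson', tol=1e-12,
...                     options={'jac_options': {'M': 5, 'w0': 0.01}})
>>> res.x
array([4.04674914, 3.91158389, 2.71791677, 1.61756251])  # may vary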
    """
    pass


def _root_linearmixing_doc():
    """
Options
-------
nit : int, optional
    Number of iterations to make. If omitted (default), make as many
    as required to meet tolerances.
disp : bool, optional
    Print status to stdout on every iteration.
maxiter : int, optional
    Maximum number of iterations to make.
ftol : float, optional
    Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
    Absolute tolerance (in max-norm) for the residual.
    If omitted, default is 6e-6.
xtol : float, optional
    Relative minimum step size. If omitted, not used.
xatol : float, optional
    Absolute minimum step size, as determined from the Jacobian
    approximation. If the step size is smaller than this, optimization
    is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
    Norm to use in convergence check. Default is the maximum norm.
line_search : {None, 'armijo' (default), 'wolfe'}, optional
    Which type of line search to use to determine the step size in
    the direction given by the Jacobian approximation. Defaults to
    'armijo'.
jac_options : dict, optional
    Options for the respective Jacobian approximation.

    alpha : float, optional
        Initial guess for the Jacobian is (-1/alpha).
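
Examples
--------
As the warning in `root` notes, this method is strongly
problem-dependent; a deliberately easy sketch on a contractive
one-dimensional equation:

>>> import numpy as np
>>> from scipy import optimize
>>> res = optimize.root(lambda x: 10.0 - x, 5.0, method='linearmixing')
>>> bool(np.isclose(res.x, 10.0, atol=1e-4))
True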
    """
    pass


def _root_diagbroyden_doc():
    """
Options
-------
nit : int, optional
    Number of iterations to make. If omitted (default), make as many
    as required to meet tolerances.
disp : bool, optional
    Print status to stdout on every iteration.
maxiter : int, optional
    Maximum number of iterations to make.
ftol : float, optional
    Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
    Absolute tolerance (in max-norm) for the residual.
    If omitted, default is 6e-6.
xtol : float, optional
    Relative minimum step size. If omitted, not used.
xatol : float, optional
    Absolute minimum step size, as determined from the Jacobian
    approximation. If the step size is smaller than this, optimization
    is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
    Norm to use in convergence check. Default is the maximum norm.
line_search : {None, 'armijo' (default), 'wolfe'}, optional
    Which type of line search to use to determine the step size in
    the direction given by the Jacobian approximation. Defaults to
    'armijo'.
jac_options : dict, optional
    Options for the respective Jacobian approximation.

    alpha : float, optional
        Initial guess for the Jacobian is (-1/alpha).
    """
    pass


def _root_excitingmixing_doc():
    """
Options
-------
nit : int, optional
    Number of iterations to make. If omitted (default), make as many
    as required to meet tolerances.
disp : bool, optional
    Print status to stdout on every iteration.
maxiter : int, optional
    Maximum number of iterations to make.
ftol : float, optional
    Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
    Absolute tolerance (in max-norm) for the residual.
    If omitted, default is 6e-6.
xtol : float, optional
    Relative minimum step size. If omitted, not used.
xatol : float, optional
    Absolute minimum step size, as determined from the Jacobian
    approximation. If the step size is smaller than this, optimization
    is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
    Norm to use in convergence check. Default is the maximum norm.
line_search : {None, 'armijo' (default), 'wolfe'}, optional
    Which type of line search to use to determine the step size in
    the direction given by the Jacobian approximation. Defaults to
    'armijo'.
jac_options : dict, optional
    Options for the respective Jacobian approximation.

    alpha : float, optional
        Initial Jacobian approximation is (-1/alpha).
    alphamax : float, optional
        The entries of the diagonal Jacobian are kept in the range
        ``[alpha, alphamax]``.
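
Examples
--------
A deliberately easy sketch on the same contractive one-dimensional
equation used for *linearmixing*, with an explicit ``alpha`` bracket:

>>> import numpy as np
>>> from scipy import optimize
>>> res = optimize.root(lambda x: 10.0 - x, 5.0, method='excitingmixing',
...                     options={'jac_options': {'alpha': 0.5,
...                                              'alphamax': 1.0}})
>>> bool(np.isclose(res.x, 10.0, atol=1e-4))
True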
    """
    pass


def _root_krylov_doc():
    """
Options
-------
nit : int, optional
    Number of iterations to make. If omitted (default), make as many
    as required to meet tolerances.
disp : bool, optional
    Print status to stdout on every iteration.
maxiter : int, optional
    Maximum number of iterations to make.
ftol : float, optional
    Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
    Absolute tolerance (in max-norm) for the residual.
    If omitted, default is 6e-6.
xtol : float, optional
    Relative minimum step size. If omitted, not used.
xatol : float, optional
    Absolute minimum step size, as determined from the Jacobian
    approximation. If the step size is smaller than this, optimization
    is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
    Norm to use in convergence check. Default is the maximum norm.
line_search : {None, 'armijo' (default), 'wolfe'}, optional
    Which type of line search to use to determine the step size in
    the direction given by the Jacobian approximation. Defaults to
    'armijo'.
jac_options : dict, optional
    Options for the respective Jacobian approximation.

    rdiff : float, optional
        Relative step size to use in numerical differentiation.
    method : str or callable, optional
        Krylov method to use to approximate the Jacobian. Can be a string,
        or a function implementing the same interface as the iterative
        solvers in `scipy.sparse.linalg`. If a string, needs to be one of:
        ``'lgmres'``, ``'gmres'``, ``'bicgstab'``, ``'cgs'``, ``'minres'``,
        ``'tfqmr'``.

        The default is `scipy.sparse.linalg.lgmres`.
    inner_M : LinearOperator or InverseJacobian
        Preconditioner for the inner Krylov iteration.
        Note that you can also use inverse Jacobians as (adaptive)
        preconditioners. For example,

        >>> jac = BroydenFirst()
        >>> kjac = KrylovJacobian(inner_M=jac.inverse)

        If the preconditioner has a method named 'update', it will
        be called as ``update(x, f)`` after each nonlinear step,
        with ``x`` giving the current point, and ``f`` the current
        function value.
    inner_rtol, inner_atol, inner_callback, ...
        Parameters to pass on to the "inner" Krylov solver.

        For a full list of options, see the documentation for the
        solver you are using. By default this is `scipy.sparse.linalg.lgmres`.
        If the solver has been overridden through `method`, see the documentation
        for that solver instead.
        To use an option for that solver, prepend ``inner_`` to it.
        For example, to control the ``rtol`` argument to the solver,
        set the `inner_rtol` option here.

    outer_k : int, optional
        Size of the subspace kept across LGMRES nonlinear
        iterations.

        See `scipy.sparse.linalg.lgmres` for details.
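
Examples
--------
A hedged sketch of forwarding inner-solver options through `root`
(``inner_maxiter`` bounds each inner LGMRES call; see
`scipy.optimize.KrylovJacobian` for the full set of options):

>>> import numpy as np
>>> from scipy import optimize
>>> def func(x):
...     return np.cos(x) + x[::-1] - [1, 2, 3, 4]
>>> res = optimize.root(func, [1, 1, 1, 1], method='krylov',
...                     options={'jac_options': {'method': 'lgmres',
...                                              'inner_maxiter': 20}})
>>> res.x
array([4.04674914, 3.91158389, 2.71791677, 1.61756251])  # may vary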
    """
    pass