
"""
K-means clustering and vector quantization (:mod:`scipy.cluster.vq`)
====================================================================

Provides routines for k-means clustering, generating code books
from k-means models and quantizing vectors by comparing them with
centroids in a code book.

.. autosummary::
   :toctree: generated/

   whiten -- Normalize a group of observations so each feature has unit variance
   vq -- Calculate code book membership of a set of observation vectors
   kmeans -- Perform k-means on a set of observation vectors forming k clusters
   kmeans2 -- A different implementation of k-means with more methods
              for initializing centroids

Background information
----------------------
The k-means algorithm takes as input the number of clusters to
generate, k, and a set of observation vectors to cluster. It
returns a set of centroids, one for each of the k clusters. An
observation vector is classified with the cluster number or
centroid index of the centroid closest to it.

A vector v belongs to cluster i if it is closer to centroid i than
any other centroid. If v belongs to i, we say centroid i is the
dominating centroid of v. The k-means algorithm tries to
minimize distortion, which is defined as the sum of the squared distances
between each observation vector and its dominating centroid.
The minimization is achieved by iteratively reclassifying
the observations into clusters and recalculating the centroids until
a configuration is reached in which the centroids are stable. One can
also define a maximum number of iterations.

Since vector quantization is a natural application for k-means,
information theory terminology is often used. The centroid index
or cluster index is also referred to as a "code" and the table
mapping codes to centroids and vice versa is often referred to as a
"code book". The result of k-means, a set of centroids, can be
used to quantize vectors. Quantization aims to find an encoding of
vectors that reduces the expected distortion.

All routines expect obs to be an M by N array, where the rows are
the observation vectors. The codebook is a k by N array, where the
ith row is the centroid of code word i. The observation vectors
and centroids have the same feature dimension.

As an example, suppose we wish to compress a 24-bit color image
(each pixel is represented by one byte for red, one for blue, and
one for green) before sending it over the web. By using a smaller
8-bit encoding, we can reduce the amount of data by two
thirds. Ideally, the colors for each of the 256 possible 8-bit
encoding values should be chosen to minimize distortion of the
color. Running k-means with k=256 generates a code book of 256
codes, which fills up all possible 8-bit sequences. Instead of
sending a 3-byte value for each pixel, the 8-bit centroid index
(or code word) of the dominating centroid is transmitted. The code
book is also sent over the wire so each 8-bit code can be
translated back to a 24-bit pixel value representation. If the
image of interest was of an ocean, we would expect many 24-bit
blues to be represented by 8-bit codes. If it was an image of a
human face, more flesh-tone colors would be represented in the
code book.

    N)deque)_asarrayarray_namespacexp_sizexp_copy)check_random_staterng_integers_transition_to_rng)array_api_extra)cdist   )_vqrestructuredtext)whitenvqkmeanskmeans2c                       \ rS rSrSrg)ClusterErrorU    N)__name__
__module____qualname____firstlineno____static_attributes__r       C/var/www/html/venv/lib/python3.13/site-packages/scipy/cluster/vq.pyr   r   U   s    r   r   c                     [        U 5      n[        XUS9n UR                  U SS9nUS:H  nUR                  U5      (       a  SX4'   [        R
                  " S[        SS9  X-  $ )a  
Normalize a group of observations on a per feature basis.

Before running k-means, it is beneficial to rescale each feature
dimension of the observation set by its standard deviation (i.e. "whiten"
it - as in "white noise" where each frequency has equal power).
Each feature is divided by its standard deviation across all observations
to give it unit variance.

Parameters
----------
obs : ndarray
    Each row of the array is an observation.  The
    columns are the features seen during each observation.

    >>> #         f0    f1    f2
    >>> obs = [[  1.,   1.,   1.],  #o0
    ...        [  2.,   2.,   2.],  #o1
    ...        [  3.,   3.,   3.],  #o2
    ...        [  4.,   4.,   4.]]  #o3

check_finite : bool, optional
    Whether to check that the input matrices contain only finite numbers.
    Disabling may give a performance gain, but may result in problems
    (crashes, non-termination) if the inputs do contain infinities or NaNs.
    Default: True

Returns
-------
result : ndarray
    Contains the values in `obs` scaled by the standard deviation
    of each column.

Examples
--------
>>> import numpy as np
>>> from scipy.cluster.vq import whiten
>>> features  = np.array([[1.9, 2.3, 1.7],
...                       [1.5, 2.5, 2.2],
...                       [0.8, 0.6, 1.7,]])
>>> whiten(features)
array([[ 4.17944278,  2.69811351,  7.21248917],
       [ 3.29956009,  2.93273208,  9.33380951],
       [ 1.75976538,  0.7038557 ,  7.21248917]])

    """
    xp = array_namespace(obs)
    obs = _asarray(obs, check_finite=check_finite, xp=xp)
    std_dev = xp.std(obs, axis=0)
    zero_std_mask = std_dev == 0
    if xp.any(zero_std_mask):
        std_dev[zero_std_mask] = 1.0
        warnings.warn("Some columns have standard deviation zero. "
                      "The values of these columns will not change.",
                      RuntimeWarning, stacklevel=2)
    return obs / std_dev


def vq(obs, code_book, check_finite=True):
    """
Assign codes from a code book to observations.

Assigns a code from a code book to each observation. Each
observation vector in the 'M' by 'N' `obs` array is compared with the
centroids in the code book and assigned the code of the closest
centroid.

The features in `obs` should have unit variance, which can be
achieved by passing them through the whiten function. The code
book can be created with the k-means algorithm or a different
encoding algorithm.

Parameters
----------
obs : ndarray
    Each row of the 'M' x 'N' array is an observation. The columns are
    the "features" seen during each observation. The features must be
    whitened first using the whiten function or something equivalent.
code_book : ndarray
    The code book is usually generated using the k-means algorithm.
    Each row of the array holds a different code, and the columns are
    the features of the code.

     >>> #              f0    f1    f2   f3
     >>> code_book = [
     ...             [  1.,   2.,   3.,   4.],  #c0
     ...             [  1.,   2.,   3.,   4.],  #c1
     ...             [  1.,   2.,   3.,   4.]]  #c2

check_finite : bool, optional
    Whether to check that the input matrices contain only finite numbers.
    Disabling may give a performance gain, but may result in problems
    (crashes, non-termination) if the inputs do contain infinities or NaNs.
    Default: True

Returns
-------
code : ndarray
    A length M array holding the code book index for each observation.
dist : ndarray
    The distortion (distance) between the observation and its nearest
    code.

Examples
--------
>>> import numpy as np
>>> from scipy.cluster.vq import vq
>>> code_book = np.array([[1., 1., 1.],
...                       [2., 2., 2.]])
>>> features  = np.array([[1.9, 2.3, 1.7],
...                       [1.5, 2.5, 2.2],
...                       [0.8, 0.6, 1.7]])
>>> vq(features, code_book)
(array([1, 1, 0], dtype=int32), array([0.43588989, 0.73484692, 0.83066239]))
    """
    xp = array_namespace(obs, code_book)
    obs = _asarray(obs, xp=xp, check_finite=check_finite)
    code_book = _asarray(code_book, xp=xp, check_finite=check_finite)
    ct = xp.result_type(obs, code_book)

    c_obs = xp.astype(obs, ct, copy=False)
    c_code_book = xp.astype(code_book, ct, copy=False)

    if xp.isdtype(ct, kind='real floating'):
        # use the accelerated C implementation for real floating-point input
        c_obs = np.asarray(c_obs)
        c_code_book = np.asarray(c_code_book)
        result = _vq.vq(c_obs, c_code_book)
        return xp.asarray(result[0]), xp.asarray(result[1])
    return py_vq(obs, code_book, check_finite=False)


def py_vq(obs, code_book, check_finite=True):
    """

The algorithm computes the Euclidean distance between each
observation and every frame in the code_book.

Parameters
----------
obs : ndarray
    Expects a rank 2 array. Each row is one observation.
code_book : ndarray
    Code book to use. Same format as obs. Should have the same number of
    features (e.g., columns) as obs.
check_finite : bool, optional
    Whether to check that the input matrices contain only finite numbers.
    Disabling may give a performance gain, but may result in problems
    (crashes, non-termination) if the inputs do contain infinities or NaNs.
    Default: True

Returns
-------
code : ndarray
    code[i] gives the label of the ith observation; its code is
    code_book[code[i]].
min_dist : ndarray
    min_dist[i] gives the distance between the ith observation and its
    corresponding code.

Notes
-----
This function is slower than the C version but works for
all input types. If the inputs have the wrong types for the
C versions of the function, this one is called as a last resort.

It is about 20 times slower than the C version.

r1   z3Observation and code_book should have the same rankr   Nr"   )	r   r   ndim
    """
    xp = array_namespace(obs, code_book)
    obs = _asarray(obs, xp=xp, check_finite=check_finite)
    code_book = _asarray(code_book, xp=xp, check_finite=check_finite)

    if obs.ndim != code_book.ndim:
        raise ValueError("Observation and code_book should have the same rank")

    if obs.ndim == 1:
        obs = obs[:, xp.newaxis]
        code_book = code_book[:, xp.newaxis]

    # `cdist` works on NumPy arrays; convert back to the input namespace after
    dist = xp.asarray(cdist(np.asarray(obs), np.asarray(code_book)))
    code = xp.argmin(dist, axis=-1)
    min_dist = xp.min(dist, axis=-1)
    return code, min_dist


def _kmeans(obs, guess, thresh=1e-5, xp=None):
    """

Returns
-------
code_book
    The lowest distortion codebook found.
avg_dist
    The average distance a observation is from a code in the book.
    Lower means the code_book matches the data better.

See Also
--------
kmeans : wrapper around k-means

Examples
--------
Note: not whitened in this example.

>>> import numpy as np
>>> from scipy.cluster.vq import _kmeans
>>> features  = np.array([[ 1.9,2.3],
...                       [ 1.5,2.5],
...                       [ 0.8,0.6],
...                       [ 0.4,1.8],
...                       [ 1.0,1.0]])
>>> book = np.array((features[0],features[2]))
>>> _kmeans(features,book)
(array([[ 1.7       ,  2.4       ],
       [ 0.73333333,  1.13333333]]), 0.40563916697728591)
    """
    xp = np if xp is None else xp
    code_book = guess
    diff = xp.inf
    prev_avg_dists = deque([diff], maxlen=2)
    while diff > thresh:
        # compute membership and distances between obs and code_book
        obs_code, distort = vq(obs, code_book, check_finite=False)
        prev_avg_dists.append(xp.mean(distort, axis=-1))
        # recalc code_book as centroids of associated obs
        obs = np.asarray(obs)
        obs_code = np.asarray(obs_code)
        code_book, has_members = _vq.update_cluster_means(obs, obs_code,
                                                          code_book.shape[0])
        obs = xp.asarray(obs)
        obs_code = xp.asarray(obs_code)
        code_book = xp.asarray(code_book)
        has_members = xp.asarray(has_members)
        # remove centroids that have no associated observations
        code_book = code_book[has_members]
        diff = xp.abs(prev_avg_dists[0] - prev_avg_dists[1])

    return code_book, prev_avg_dists[1]


@_transition_to_rng('seed')
def kmeans(obs, k_or_guess, iter=20, thresh=1e-5, check_finite=True, *,
           rng=None):
    """
Performs k-means on a set of observation vectors forming k clusters.

The k-means algorithm adjusts the classification of the observations
into clusters and updates the cluster centroids until the position of
the centroids is stable over successive iterations. In this
implementation of the algorithm, the stability of the centroids is
determined by comparing the absolute value of the change in the average
Euclidean distance between the observations and their corresponding
centroids against a threshold. This yields
a code book mapping centroids to codes and vice versa.

Parameters
----------
obs : ndarray
   Each row of the M by N array is an observation vector. The
   columns are the features seen during each observation.
   The features must be whitened first with the `whiten` function.

k_or_guess : int or ndarray
   The number of centroids to generate. A code is assigned to
   each centroid, which is also the row index of the centroid
   in the code_book matrix generated.

   The initial k centroids are chosen by randomly selecting
   observations from the observation matrix. Alternatively,
   passing a k by N array specifies the initial k centroids.

iter : int, optional
   The number of times to run k-means, returning the codebook
   with the lowest distortion. This argument is ignored if
   initial centroids are specified with an array for the
   ``k_or_guess`` parameter. This parameter does not represent the
   number of iterations of the k-means algorithm.

thresh : float, optional
   Terminates the k-means algorithm if the change in
   distortion since the last k-means iteration is less than
   or equal to threshold.

check_finite : bool, optional
    Whether to check that the input matrices contain only finite numbers.
    Disabling may give a performance gain, but may result in problems
    (crashes, non-termination) if the inputs do contain infinities or NaNs.
    Default: True
rng : `numpy.random.Generator`, optional
    Pseudorandom number generator state. When `rng` is None, a new
    `numpy.random.Generator` is created using entropy from the
    operating system. Types other than `numpy.random.Generator` are
    passed to `numpy.random.default_rng` to instantiate a ``Generator``.

Returns
-------
codebook : ndarray
   A k by N array of k centroids. The ith centroid
   codebook[i] is represented with the code i. The centroids
   and codes generated represent the lowest distortion seen,
   not necessarily the globally minimal distortion.
   Note that the number of centroids is not necessarily the same as the
   ``k_or_guess`` parameter, because centroids assigned to no observations
   are removed during iterations.

distortion : float
   The mean (non-squared) Euclidean distance between the observations
   passed and the centroids generated. Note the difference to the standard
   definition of distortion in the context of the k-means algorithm, which
   is the sum of the squared distances.

See Also
--------
kmeans2 : a different implementation of k-means clustering
   with more methods for generating initial centroids but without
   using a distortion change threshold as a stopping criterion.

whiten : must be called prior to passing an observation matrix
   to kmeans.

Notes
-----
For more functionalities or optimal performance, you can use
`sklearn.cluster.KMeans <https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html>`_.
`This <https://hdbscan.readthedocs.io/en/latest/performance_and_scalability.html#comparison-of-high-performance-implementations>`_
is a benchmark result of several implementations.

Examples
--------
>>> import numpy as np
>>> from scipy.cluster.vq import vq, kmeans, whiten
>>> import matplotlib.pyplot as plt
>>> features  = np.array([[ 1.9,2.3],
...                       [ 1.5,2.5],
...                       [ 0.8,0.6],
...                       [ 0.4,1.8],
...                       [ 0.1,0.1],
...                       [ 0.2,1.8],
...                       [ 2.0,0.5],
...                       [ 0.3,1.5],
...                       [ 1.0,1.0]])
>>> whitened = whiten(features)
>>> book = np.array((whitened[0],whitened[2]))
>>> kmeans(whitened,book)
(array([[ 2.3110306 ,  2.86287398],    # random
       [ 0.93218041,  1.24398691]]), 0.85684700941625547)

>>> codes = 3
>>> kmeans(whitened,codes)
(array([[ 2.3110306 ,  2.86287398],    # random
       [ 1.32544402,  0.65607529],
       [ 0.40782893,  2.02786907]]), 0.5196582527686241)

>>> # Create 50 datapoints in two clusters a and b
>>> pts = 50
>>> rng = np.random.default_rng()
>>> a = rng.multivariate_normal([0, 0], [[4, 1], [1, 4]], size=pts)
>>> b = rng.multivariate_normal([30, 10],
...                             [[10, 2], [2, 1]],
...                             size=pts)
>>> features = np.concatenate((a, b))
>>> # Whiten data
>>> whitened = whiten(features)
>>> # Find 2 clusters in the data
>>> codebook, distortion = kmeans(whitened, 2)
>>> # Plot whitened data and cluster centers in red
>>> plt.scatter(whitened[:, 0], whitened[:, 1])
>>> plt.scatter(codebook[:, 0], codebook[:, 1], c='r')
>>> plt.show()

r1   r   ziter must be at least 1, got z'Asked for 0 clusters. Initial book was )rS   r!   z1If k_or_guess is a scalar, it must be an integer.zAsked for %d clusters.)
    """
    if isinstance(k_or_guess, int):
        xp = array_namespace(obs)
    else:
        xp = array_namespace(obs, k_or_guess)
    obs = _asarray(obs, xp=xp, check_finite=check_finite)
    guess = _asarray(k_or_guess, xp=xp, check_finite=check_finite)
    if iter < 1:
        raise ValueError(f"iter must be at least 1, got {iter}")

    # Determine whether a count (scalar) or an initial guess (array) was given.
    if xp_size(guess) != 1:
        if xp_size(guess) < 1:
            raise ValueError(f"Asked for 0 clusters. Initial book was {guess}")
        return _kmeans(obs, guess, thresh=thresh, xp=xp)

    # k_or_guess is a scalar, so it must be the number of clusters
    k = int(guess)
    if k != guess:
        raise ValueError("If k_or_guess is a scalar, it must be an integer.")
    if k < 1:
        raise ValueError("Asked for %d clusters." % k)

    rng = check_random_state(rng)

    # initialize best distance value to a large value
    best_dist = xp.inf
    for i in range(iter):
        # the initial code book is randomly selected from observations
        guess = _kpoints(obs, k, rng, xp)
        book, dist = _kmeans(obs, guess, thresh=thresh, xp=xp)
        if dist < best_dist:
            best_book = book
            best_dist = dist
    return best_book, best_dist


def _kpoints(data, k, rng, xp):
    """

Parameters
----------
data : ndarray
    Expect a rank 1 or 2 array. Rank 1 is assumed to describe one
    dimensional data, rank 2 multidimensional data, in which case one
    row is one observation.
k : int
    Number of samples to generate.
rng : `numpy.random.Generator` or `numpy.random.RandomState`
    Random number generator.

Returns
-------
x : ndarray
    A 'k' by 'N' containing the initial centroids

r   F)sizereplacer   )dtyper"   )choicerP   r^   r9   rk   take)datarc   r[   r!   idxs        r   r`   r`     sW    ( **TZZ]Q*
    """
    idx = rng.choice(data.shape[0], size=int(k), replace=False)
    # convert to an array with the namespace's default integer dtype
    idx = xp.asarray(idx, dtype=xp.asarray([1]).dtype)
    return xp.take(data, idx, axis=0)


def _krandinit(data, k, rng, xp):
    """

More precisely, it returns k observations sampled from a Gaussian random
variable whose mean and covariances are the ones estimated from the data.

Parameters
----------
data : ndarray
    Expect a rank 1 or 2 array. Rank 1 is assumed to describe 1-D
    data, rank 2 multidimensional data, in which case one
    row is one observation.
k : int
    Number of samples to generate.
rng : `numpy.random.Generator` or `numpy.random.RandomState`
    Random number generator.

Returns
-------
x : ndarray
    A 'k' by 'N' containing the initial centroids

r   r"   r   r!   )ri   F)full_matricesNr$   r%   )rA   r!   )rN   r8   r9   rA   xpxcovstandard_normalsqrtrP   linalgsvdr   
atleast_ndTcholesky)rn   rc   r[   r!   mu_covx_svhsVhs              r   
_krandinitr     sx   . 
    """
    mu = xp.mean(data, axis=0)

    if data.ndim == 1:
        _cov = xpx.cov(data, xp=xp)
        x = xp.asarray(rng.standard_normal(size=k))
        x *= xp.sqrt(_cov)
    elif data.shape[1] > data.shape[0]:
        # initialize when the covariance matrix is rank deficient
        _, s, vh = xp.linalg.svd(data - mu, full_matrices=False)
        x = xp.asarray(rng.standard_normal(size=(k, xp_size(s))))
        sVh = s[:, None] * vh / xp.sqrt(xp.asarray(data.shape[0] - 1.0))
        x = x @ sVh
    else:
        _cov = xpx.atleast_nd(xpx.cov(data.T, xp=xp), ndim=2, xp=xp)

        # k rows, d cols (one row = one obs)
        # Generate k samples of a random variable ~ Gaussian(mu, cov)
        x = xp.asarray(rng.standard_normal(size=(k, xp_size(mu))))
        x = x @ xp.linalg.cholesky(_cov).T

    x += mu
    return x


def _kpp(data, k, rng, xp):
    """

Parameters
----------
data : ndarray
    Expect a rank 1 or 2 array. Rank 1 is assumed to describe 1-D
    data, rank 2 multidimensional data, in which case one
    row is one observation.
k : int
    Number of samples to generate.
rng : `numpy.random.Generator` or `numpy.random.RandomState`
    Random number generator.

Returns
-------
init : ndarray
    A 'k' by 'N' containing the initial centroids.

References
----------
.. [1] D. Arthur and S. Vassilvitskii, "k-means++: the advantages of
   careful seeding", Proceedings of the Eighteenth Annual ACM-SIAM Symposium
   on Discrete Algorithms, 2007.
r   Nr   sqeuclidean)metricr"   )lenrP   emptyr^   r_   r	   r   rE   sumcumsumuniformr8   r9   searchsorted)rn   rc   r[   r!   rA   dimsinitre   D2probscumprobsrs               r   _kppr   @  s   4 tzz?DqyAtG}::a=D88SVTN#D1X6l3
    """
    ndim = len(data.shape)
    if ndim == 1:
        data = data[:, None]

    dims = data.shape[1]

    init = xp.empty((int(k), dims))

    for i in range(k):
        if i == 0:
            init[i, :] = data[rng_integers(rng, data.shape[0]), :]
        else:
            # squared distance of each observation to its nearest centroid
            D2 = cdist(init[:i, :], data, metric='sqeuclidean').min(axis=0)
            probs = D2 / D2.sum()
            cumprobs = probs.cumsum()
            r = rng.uniform()
            cumprobs = np.asarray(cumprobs)
            init[i, :] = data[np.searchsorted(cumprobs, r), :]

    if ndim == 1:
        init = init[:, 0]
    return init


_valid_init_meth = {'random': _krandinit, 'points': _kpoints, '++': _kpp}


def _missing_warn():
    """Print a warning when called."""
    warnings.warn("One of the clusters is empty. "
                  "Re-run kmeans with a different initialization.",
                  stacklevel=3)


def _missing_raise():
    """Raise a ClusterError when called."""
    raise ClusterError("One of the clusters is empty. "
                       "Re-run kmeans with a different initialization.")


_valid_miss_meth = {'warn': _missing_warn, 'raise': _missing_raise}


@_transition_to_rng('seed')
def kmeans2(data, k, iter=10, thresh=1e-5, minit='random',
            missing='warn', check_finite=True, *, rng=None):
    """
Classify a set of observations into k clusters using the k-means algorithm.

The algorithm attempts to minimize the Euclidean distance between
observations and centroids. Several initialization methods are
included.

Parameters
----------
data : ndarray
    A 'M' by 'N' array of 'M' observations in 'N' dimensions or a length
    'M' array of 'M' 1-D observations.
k : int or ndarray
    The number of clusters to form as well as the number of
    centroids to generate. If `minit` initialization string is
    'matrix', or if a ndarray is given instead, it is
    interpreted as the initial clusters to use.
iter : int, optional
    Number of iterations of the k-means algorithm to run. Note
    that this differs in meaning from the iters parameter to
    the kmeans function.
thresh : float, optional
    (not used yet)
minit : str, optional
    Method for initialization. Available methods are 'random',
    'points', '++' and 'matrix':

    'random': generate k centroids from a Gaussian with mean and
    variance estimated from the data.

    'points': choose k observations (rows) at random from data for
    the initial centroids.

    '++': choose k observations according to the kmeans++ method
    (careful seeding)

    'matrix': interpret the k parameter as a k by M (or length k
    array for 1-D data) array of initial centroids.
missing : str, optional
    Method to deal with empty clusters. Available methods are
    'warn' and 'raise':

    'warn': give a warning and continue.

    'raise': raise a ClusterError and terminate the algorithm.
check_finite : bool, optional
    Whether to check that the input matrices contain only finite numbers.
    Disabling may give a performance gain, but may result in problems
    (crashes, non-termination) if the inputs do contain infinities or NaNs.
    Default: True
rng : `numpy.random.Generator`, optional
    Pseudorandom number generator state. When `rng` is None, a new
    `numpy.random.Generator` is created using entropy from the
    operating system. Types other than `numpy.random.Generator` are
    passed to `numpy.random.default_rng` to instantiate a ``Generator``.

Returns
-------
centroid : ndarray
    A 'k' by 'N' array of centroids found at the last iteration of
    k-means.
label : ndarray
    label[i] is the code or index of the centroid the
    ith observation is closest to.

See Also
--------
kmeans

References
----------
.. [1] D. Arthur and S. Vassilvitskii, "k-means++: the advantages of
   careful seeding", Proceedings of the Eighteenth Annual ACM-SIAM Symposium
   on Discrete Algorithms, 2007.

Examples
--------
>>> from scipy.cluster.vq import kmeans2
>>> import matplotlib.pyplot as plt
>>> import numpy as np

Create z, an array with shape (100, 2) containing a mixture of samples
from three multivariate normal distributions.

>>> rng = np.random.default_rng()
>>> a = rng.multivariate_normal([0, 6], [[2, 1], [1, 1.5]], size=45)
>>> b = rng.multivariate_normal([2, 0], [[1, -1], [-1, 3]], size=30)
>>> c = rng.multivariate_normal([6, 4], [[5, 0], [0, 1.2]], size=25)
>>> z = np.concatenate((a, b, c))
>>> rng.shuffle(z)

Compute three clusters.

>>> centroid, label = kmeans2(z, 3, minit='points')
>>> centroid
array([[ 2.22274463, -0.61666946],  # may vary
       [ 0.54069047,  5.86541444],
       [ 6.73846769,  4.01991898]])

How many points are in each cluster?

>>> counts = np.bincount(label)
>>> counts
array([29, 51, 20])  # may vary

Plot the clusters.

>>> w0 = z[label == 0]
>>> w1 = z[label == 1]
>>> w2 = z[label == 2]
>>> plt.plot(w0[:, 0], w0[:, 1], 'o', alpha=0.5, label='cluster 0')
>>> plt.plot(w1[:, 0], w1[:, 1], 'd', alpha=0.5, label='cluster 1')
>>> plt.plot(w2[:, 0], w2[:, 1], 's', alpha=0.5, label='cluster 2')
>>> plt.plot(centroid[:, 0], centroid[:, 1], 'k*', label='centroids')
>>> plt.axis('equal')
>>> plt.legend(shadow=True)
>>> plt.show()

r   zInvalid iter (z), must be a positive integer.zUnknown missing method Nr1   rq   r%   z#Input of rank > 2 is not supported.zEmpty input is not supported.matrixzk array doesn't match data rankr   z$k array doesn't match data dimensionz-Cannot ask kmeans2 for %d clusters (k was %s)z$k was not an integer, was converted.r&   zUnknown init method r4   )r^   rB   _valid_miss_methKeyErrorr]   r   r   r   rA   rP   r   r*   r+   _valid_init_methr   r8   r9   r_   r   r   rO   all)rn   rc   rb   rS   minitmissingr    r[   	miss_mether!   r;   dnc	init_methre   labelnew_code_bookrX   s                      r   r   r     s   t 4y1}>$/MNOOG$W-	 !ST"T%Dl;D!IyyA~	aJJqM>??t}qGI.2899 GI.299	&>??__Q99q=Y__Q/14CDD^6 +.0_= > >_MM@QO	<(/I %S)C!$3;I::dD
    """
    if int(iter) < 1:
        raise ValueError(f"Invalid iter ({iter}), must be a positive integer.")
    try:
        miss_meth = _valid_miss_meth[missing]
    except KeyError as e:
        raise ValueError(f"Unknown missing method {missing!r}") from e

    if isinstance(k, int):
        xp = array_namespace(data)
    else:
        xp = array_namespace(data, k)
    data = _asarray(data, xp=xp, check_finite=check_finite)
    code_book = xp_copy(k, xp=xp)
    if data.ndim == 1:
        d = 1
    elif data.ndim == 2:
        d = data.shape[1]
    else:
        raise ValueError("Input of rank > 2 is not supported.")

    if xp_size(data) < 1 or xp_size(code_book) < 1:
        raise ValueError("Empty input is not supported.")

    # If k is not a single value, it should be compatible with data's shape
    if minit == 'matrix' or xp_size(code_book) > 1:
        if data.ndim != code_book.ndim:
            raise ValueError("k array doesn't match data rank")
        nc = code_book.shape[0]
        if data.ndim > 1 and code_book.shape[1] != d:
            raise ValueError("k array doesn't match data dimension")
    else:
        nc = int(code_book)

        if nc < 1:
            raise ValueError("Cannot ask kmeans2 for %d clusters (k was %s)"
                             % (nc, code_book))
        elif nc != code_book:
            warnings.warn("k was not an integer, was converted.", stacklevel=2)

        try:
            init_meth = _valid_init_meth[minit]
        except KeyError as e:
            raise ValueError(f"Unknown init method {minit!r}") from e
        else:
            rng = check_random_state(rng)
            code_book = init_meth(data, k, rng, xp)

    data = np.asarray(data)
    code_book = np.asarray(code_book)
    for i in range(iter):
        # Compute the nearest neighbor for each obs using the current code book
        label = vq(data, code_book, check_finite=check_finite)[0]
        # Update the code book by computing centroids
        new_code_book, has_members = _vq.update_cluster_means(data, label, nc)
        if not has_members.all():
            miss_meth()
            # Set the empty clusters to their previous positions
            new_code_book[~has_members] = code_book[~has_members]
        code_book = new_code_book

    return xp.asarray(code_book), xp.asarray(label)