Perform a Locally Linear Embedding analysis on the data.

Internal variables of interest:
  self.training_projection -- the LLE projection of the training data (defined when training finishes)
  self.desired_variance -- variance limit used to compute the intrinsic dimensionality

Based on the algorithm outlined in 'An Introduction to Locally Linear Embedding' by L. Saul and S. Roweis, using improvements suggested in 'Locally Linear Embedding for Classification' by D. deRidder and R.P.W. Duin.

Reference: Roweis, S. and Saul, L., Nonlinear dimensionality reduction by locally linear embedding, Science 290 (5500), pp. 2323-2326, 2000.

Original code contributed by Jake VanderPlas, University of Washington (vanderplas@astro.washington.edu).
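A minimal usage sketch follows, assuming the node is exposed as mdp.nodes.LLENode and follows the standard MDP training interface (train, stop_training, execute); the parameter values are illustrative only.

  >>> import numpy as np
  >>> import mdp
  >>> x = np.random.random((200, 10))           # 200 samples in 10 dimensions (toy data)
  >>> lle = mdp.nodes.LLENode(k=12, output_dim=2)
  >>> lle.train(x)
  >>> lle.stop_training()                       # training_projection is defined here
  >>> y = lle.execute(x)                        # project the data onto the 2-D embedding
  >>> y.shape
  (200, 2)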
Inherited from Node:
  __metaclass__ -- This metaclass is meant to overwrite the doc strings of methods such as execute, stop_training, and inverse with the ones defined in the corresponding private methods (_execute, _stop_training, _inverse, etc.).
Additional methods are inherited from Cumulator and Node.
Properties inherited from Node:
  _train_seq -- List of tuples: [(training-phase1, stop-training-phase1), (training-phase2, stop-training-phase2), ...]
  dtype -- dtype
  input_dim -- Input dimensions
  output_dim -- Output dimensions
  supported_dtypes -- Supported dtypes
Keyword Arguments:
  k -- number of nearest neighbors to use.
  r -- regularization constant. If None, r is computed automatically using the method presented in deRidder and Duin; this involves solving an eigenvalue problem for every data point and can slow down the algorithm. If specified, r multiplies the trace of the local covariance matrix of the distances, as in Saul & Roweis (faster); see the sketch below.
  svd -- if True, use SVD to compute the projection matrix; SVD is slower but more stable.
  verbose -- if True, display information about the progress of the algorithm.
  output_dim -- number of dimensions to output, or a float between 0.0 and 1.0. In the latter case, output_dim specifies the desired fraction of variance to be explained, and the final number of output dimensions is known only at the end of training (e.g., for output_dim=0.95 the algorithm keeps as many dimensions as necessary to explain 95% of the input variance).
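The sketch below illustrates, in plain NumPy, the two core steps these arguments refer to: reconstruction weights regularized by r times the trace of the local covariance matrix (as in Saul & Roweis), followed by the eigenvector-based embedding. It is a simplified illustration, not the node's actual implementation; the function name lle_sketch and its defaults are hypothetical, and the svd and variance-fraction options are omitted.

  # Simplified LLE sketch (illustrative only, not the node's implementation).
  import numpy as np

  def lle_sketch(x, k=12, r=0.001, out_dim=2):
      n = x.shape[0]
      # 1. k nearest neighbors of every point (brute-force distances)
      dist = ((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1)
      nbrs = np.argsort(dist, axis=1)[:, 1:k + 1]

      # 2. reconstruction weights for every point
      w = np.zeros((n, n))
      for i in range(n):
          z = x[nbrs[i]] - x[i]                 # neighborhood centered on x_i
          c = z @ z.T                           # local covariance (Gram) matrix
          c += np.eye(k) * r * np.trace(c)      # regularization: r * trace(C)
          wi = np.linalg.solve(c, np.ones(k))   # solve C w = 1
          w[i, nbrs[i]] = wi / wi.sum()         # normalize so the weights sum to 1

      # 3. embedding: bottom non-constant eigenvectors of M = (I - W)^T (I - W)
      m = (np.eye(n) - w).T @ (np.eye(n) - w)
      _, vec = np.linalg.eigh(m)
      return vec[:, 1:out_dim + 1]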
Method details:
  - Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
  - Transform the data list to an array object and reshape it.
  - Return True if the node can be inverted, False otherwise.
  - Return True if the node can be trained, False otherwise.