Perform Non-linear Blind Source Separation using Slow Feature Analysis.

This node is designed to iteratively extract statistically independent sources from (in principle) arbitrary invertible nonlinear mixtures. The method relies on temporal correlations in the sources and consists of a combination of nonlinear SFA and a projection algorithm. More details can be found in the reference given below (once it is published).

The node has multiple training phases; their number depends on the number of sources that must be extracted. The recommended way of training this node is through a container flow:

>>> flow = mdp.Flow([XSFANode()])
>>> flow.train(x)

Doing so will automatically run all training phases. The argument 'x' to the flow.train method can be an array or a list of iterables (see the section about Iterators in the MDP tutorial for more info). If the number of training samples is large, you may run into memory problems: use data iterators and chunk training to reduce memory usage (a sketch of chunked training follows the references below).

If you need to debug training and/or execution of this node, the suggested approach is to use the capabilities of mdp.binet. For example:

>>> flow = mdp.Flow([XSFANode()])
>>> tr_filename = binet.show_training(flow=flow, data_iterators=x)
>>> ex_filename, out = binet.show_execution(flow, x=x)

This will run training and execution with binet inspection. Snapshots of the internal flow state for each training phase and execution step will be opened in a web browser and presented as a slideshow.

References:
Sprekeler, H., Zito, T., and Wiskott, L. (2009). An Extension of Slow Feature Analysis for Nonlinear Blind Source Separation. Journal of Machine Learning Research, submitted. [pdf link follows]
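A minimal sketch of the chunked training mentioned above, assuming the whole mixture is available as a single array that is split into a re-iterable list of chunks; the random data is only a placeholder for an actual temporally correlated mixture, and the chunk count is arbitrary:

>>> import numpy as np
>>> import mdp
>>> x = np.random.random((10000, 3))        # placeholder for the observed mixture
>>> chunks = np.array_split(x, 10, axis=0)  # ten chunks of about 1000 samples each
>>> flow = mdp.Flow([mdp.nodes.XSFANode()])
>>> flow.train([chunks])                    # the list is re-iterated once per training phase
>>> sources = flow.execute(x)               # estimated sources, one per column

A plain list of arrays (rather than a generator) is used here because a node with multiple training phases needs to iterate over the data once per phase.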
Class variables inherited from Node:
    __metaclass__: This metaclass is meant to overwrite the doc strings of methods like execute, stop_training, and inverse with the ones defined in the corresponding private methods _execute, _stop_training, _inverse, etc.
Properties:
    flow: Read-only internal flow property.
Properties inherited from Node:
    _train_seq: List of tuples: [(training-phase1, stop-training-phase1), (training-phase2, stop_training-phase2), ...]
    dtype: dtype of the node
    input_dim: Input dimensions
    output_dim: Output dimensions
    supported_dtypes: Supported dtypes
Keyword arguments (node constructor):

    basic_exp -- a tuple (node, args, kwargs) defining the node used for the basic nonlinear expansion. It is assumed that the mixture is linearly invertible after this expansion. The higher the complexity of the nonlinearity, the higher the chances of inverting the unknown mixture. On the other hand, a highly complex nonlinear expansion increases the danger of numeric instabilities, which can cause singularities in the simulation or errors in the source estimation. The trade-off has to be evaluated carefully.
        Default: (mdp.nodes.PolynomialExpansionNode, (2, ), {})

    intern_exp -- a tuple (node, args, kwargs) defining the node used for the internal nonlinear expansion of the estimated sources to be removed from the input space. The same trade-off as for basic_exp applies here.
        Default: (mdp.nodes.PolynomialExpansionNode, (10, ), {})

    svd -- enable Singular Value Decomposition for normalization and regularization. Use it if the node complains about singular covariance matrices.

    verbose -- show some progress during training.
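For illustration, a hedged sketch of passing a non-default expansion to the node constructor; the cubic degree and the flags below are arbitrary choices used only to show the (node, args, kwargs) format:

>>> import mdp
>>> node = mdp.nodes.XSFANode(
...     basic_exp=(mdp.nodes.PolynomialExpansionNode, (3,), {}),  # cubic basic expansion
...     svd=True,       # regularize if covariance matrices turn out singular
...     verbose=True)   # report progress during training
>>> flow = mdp.Flow([node])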
_get_supported_dtypes(): Return the list of dtypes supported by this node.
is_invertible(): Return True if the node can be inverted, False otherwise.