**Packages that use Layer**

Package | Description |
---|---|
org.joone.engine | |
org.joone.net | |
org.joone.structure | |
**Subclasses of Layer in org.joone.engine**

Modifier and Type | Class and Description |
---|---|
class | BiasedLinearLayer: This layer consists of linear neurons. |
class | ContextLayer: The context layer is similar to the linear layer, except that it has an auto-recurrent connection between its output and input. |
class | DelayLayer: Delay unit to create temporal windows from a time series. |
class | GaussianLayer: This layer implements the Gaussian Neighborhood SOM strategy. |
class | GaussLayer: The output of a Gauss(ian) layer neuron is the sum of the weighted input values, applied to a gaussian curve (exp(-x * x)). |
class | LinearLayer: The output of a linear layer neuron is the sum of the weighted input values, scaled by the beta parameter. |
class | LogarithmicLayer: This layer implements a logarithmic transfer function. |
class | MemoryLayer |
class | RbfGaussianLayer: This class implements the nonlinear layer in Radial Basis Function (RBF) networks using Gaussian functions. |
class | RbfLayer: The base (helper) class for radial basis function layers. |
class | SigmoidLayer: The output of a sigmoid layer neuron is the sum of the weighted input values, applied to a sigmoid function. |
class | SimpleLayer: This abstract class represents layers composed of neurons that implement some transfer function. |
class | SineLayer: The output of a sine layer neuron is the sum of the weighted input values, applied to a sine function (sin(x)). |
class | SoftmaxLayer: The outputs of the softmax layer must be interpreted as probabilities. |
class | TanhLayer: Layer that applies the hyperbolic tangent transfer function to its input patterns. |
class | WTALayer: This layer implements the Winner Takes All SOM strategy. |
**Fields in org.joone.engine declared as Layer**

Modifier and Type | Field and Description |
---|---|
protected Layer | RTRLLearnerFactory.inputLayer: The input layer. |
protected Layer | RTRLLearnerFactory.Weight.layer: The joone layer used if this weight is a bias. |
protected Layer | RTRLLearnerFactory.Node.layer: The layer at which this node is found. |
protected Layer | RTRLLearnerFactory.outputLayer: The output layer from which errors are calculated, and which determines whether a node is in T. |
**Constructors in org.joone.engine with parameters of type Layer**

Constructor and Description |
---|
RTRLLearnerFactory.Node(Layer layer, int index): Creates a new node from a joone layer. |
RTRLLearnerFactory.Weight(Layer layer, int i, int K): Initialises this weight from a joone layer. |
**Subclasses of Layer in org.joone.net**

Modifier and Type | Class and Description |
---|---|
class | NestedNeuralLayer |
**Methods in org.joone.net that return Layer**

Modifier and Type | Method and Description |
---|---|
Layer[] | NeuralNet.calculateOrderedLayers(): Calculates the order of the layers of the network, from the input to the output. |
Layer | NeuralNet.findInputLayer(): Returns the input layer, found by following the rules in Layer.isInputLayer. |
Layer | NeuralNet.findOutputLayer(): Returns the output layer, found by following the rules in Layer.isOutputLayer. |
Layer | NeuralNet.getInputLayer(): Returns the input layer of the network. |
Layer | NeuralNet.getLayer(java.lang.String layerName) |
Layer[] | NeuralNet.getOrderedLayers() |
Layer | NeuralNet.getOutputLayer(): Returns the output layer of the network. |
**Methods in org.joone.net with parameters of type Layer**

Modifier and Type | Method and Description |
---|---|
void | NeuralNet.addLayer(Layer layer) |
void | NeuralNet.addLayer(Layer layer, int tier) |
void | NeuralNet.removeLayer(Layer layer) |
void | NeuralNet.setInputLayer(Layer newLayer) |
void | NeuralNet.setOrderedLayers(Layer[] orderedLayers): Sets, from outside, a particular order in which to traverse the layers. |
void | NeuralNet.setOutputLayer(Layer newLayer) |
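A minimal sketch of how the NeuralNet methods above fit together when assembling a network. The addLayer, getLayer, setInputLayer, and setOutputLayer signatures come from the tables; the FullSynapse class, the setRows/setLayerName/addInputSynapse/addOutputSynapse calls, and the INPUT_LAYER/HIDDEN_LAYER/OUTPUT_LAYER tier constants are assumptions about the standard Joone API, so treat the setup as illustrative rather than canonical.

```java
import org.joone.engine.FullSynapse;
import org.joone.engine.Layer;
import org.joone.engine.LinearLayer;
import org.joone.engine.SigmoidLayer;
import org.joone.net.NeuralNet;

public class BuildNetSketch {
    public static void main(String[] args) {
        // Three layers: linear input, sigmoid hidden, sigmoid output.
        LinearLayer input = new LinearLayer();
        SigmoidLayer hidden = new SigmoidLayer();
        SigmoidLayer output = new SigmoidLayer();
        input.setRows(2);
        hidden.setRows(3);
        output.setRows(1);
        input.setLayerName("input");

        // Fully connect consecutive layers (assumed Joone idiom).
        FullSynapse inToHid = new FullSynapse();
        FullSynapse hidToOut = new FullSynapse();
        input.addOutputSynapse(inToHid);
        hidden.addInputSynapse(inToHid);
        hidden.addOutputSynapse(hidToOut);
        output.addInputSynapse(hidToOut);

        // Register each layer in its tier via addLayer(Layer, int).
        NeuralNet net = new NeuralNet();
        net.addLayer(input, NeuralNet.INPUT_LAYER);
        net.addLayer(hidden, NeuralNet.HIDDEN_LAYER);
        net.addLayer(output, NeuralNet.OUTPUT_LAYER);

        // getLayer(String) looks a layer up by name.
        Layer found = net.getLayer("input");
        System.out.println(found == input); // true
    }
}
```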
**Subclasses of Layer in org.joone.structure**

Modifier and Type | Class and Description |
---|---|
class | NetworkLayer: Wraps an existing joone network into a single layer. |
**Fields in org.joone.structure declared as Layer**

Modifier and Type | Field and Description |
---|---|
protected Layer | NodesAndWeights.Weight.layer: The joone layer used if this weight is a bias. |
protected Layer | NodesAndWeights.Node.layer: The layer at which this node is found. |
**Methods in org.joone.structure that return Layer**

Modifier and Type | Method and Description |
---|---|
protected Layer | Nakayama.findInputLayer(Synapse aSynapse): Finds the input layer of a synapse. |
protected Layer | Nakayama.findOutputLayer(Synapse aSynapse): Finds the output layer of a synapse. |
**Methods in org.joone.structure with parameters of type Layer**

Modifier and Type | Method and Description |
---|---|
void | Nakayama.addLayer(Layer aLayer): Adds layers to this optimizer. |
protected double | Nakayama.getSumAbsoluteWeights(Layer aLayer, int aNeuron): Sums the absolute values of the output weights of a neuron within a layer. |
static void | NodeFactory.setNodeFunctions(AbstractNode node, Layer layer): Sets the transport and derivative functions of a node from the type of layer it is found in. |
**Constructors in org.joone.structure with parameters of type Layer**

Constructor and Description |
---|
NodesAndWeights.Node(Layer layer, int index, int order): Creates a new node from a joone layer and checks whether it has a valid initial state. |
NodesAndWeights.Weight(Layer layer, int i, int I, int J): Initialises this weight from a joone layer. |