Source code documentation: WAVIX
Manual pages overview and cross references
delete debug utils from path
ApplicationRoot\WavixIV
13-Feb-2009 14:24:45
618 bytes
This file has been generated automatically by function exetimestamp_create
ApplicationRoot\WavixIV
13-Feb-2009 14:24:50
401 bytes
ApplicationRoot\WavixIV
11-Mar-2009 05:34:58
327 bytes
wavix - main program of the wavixIV application;
opens the wavix screen or runs a batch job
CALL:
wavix(func, varargin)
INPUT:
func: <string> naming the function to run in batch mode,
possible values:
'matroos2dia'
OUTPUT:
no direct output; the wavix screen is opened
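EXAMPLE (a minimal sketch; it is assumed that func may be omitted for interactive use):
wavix                    % open the wavix screen
wavix('matroos2dia')     % run the 'matroos2dia' job in batch mode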
See also: wavixmain
ApplicationRoot\WavixIV
18-Sep-2010 18:38:03
1867 bytes
wavixshowdata - Visualization of ALL data in wavix
CALL:
wavixshowdata(signature,udnew,ind)
INPUT:
signature: <
udnew: <undoredo object> containing the central database
ind: <cell array> a CELL array of
struct arrays with fields
'type'
'subs'
OUTPUT:
no direct output; all objects related to data in the
work area are updated.
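EXAMPLE (purely illustrative; it is assumed that the 'type'/'subs' structs follow
MATLAB's substruct convention and that signature and udnew come from the wavix framework):
ind = {substruct('.','data','()',{1:10})};   % hypothetical index into the database
wavixshowdata(signature,udnew,ind)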
ApplicationRoot\WavixIV
14-Oct-2007 19:16:48
2553 bytes
wavixshowopts - Visualization of ALL options in wavix
CALL:
wavixshowopts(signature,udnew,ind)
INPUT:
signature:
udnew: structure with data from the work area
ind: a CELL array of
struct arrays with fields
'type'
'subs'
OUTPUT:
no direct output; all objects related to data in the
work area are updated.
ApplicationRoot\WavixIV
14-Oct-2007 19:36:26
1589 bytes
ADDNNTEMPPATH Add NNT temporary directory to path.
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:19:16
443 bytes
BOILER_NET Boilerplate script for net input functions.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:12
2057 bytes
BOILER_PERFORM Boilerplate code for performance functions.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:14
4507 bytes
PROCESS FUNCTION BOILERPLATE CODE
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:14
4711 bytes
TRANSFER_BOILER Boilerplate code for transfer functions.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:16
2763 bytes
BOILER_WEIGHT Boilerplate script for weight functions.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:16
1399 bytes
BOXDIST Box distance function.
Syntax
d = boxdist(pos);
Description
BOXDIST is a layer distance function used to find
the distances between the layer's neurons given their
positions.
BOXDIST(pos) takes one argument,
POS - NxS matrix of neuron positions.
and returns the SxS matrix of distances.
BOXDIST is most commonly used in conjunction with layers
whose topology function is GRIDTOP.
Examples
Here we define a random matrix of positions for 10 neurons
arranged in three dimensional space and find their distances.
pos = rand(3,10);
d = boxdist(pos)
Network Use
You can create a standard network that uses BOXDIST
as a distance function by calling NEWSOM.
To change a network so a layer's topology uses BOXDIST set
NET.layers{i}.distanceFcn to 'boxdist'.
In either case, call SIM to simulate the network with BOXDIST.
See NEWSOM for training and adaption examples.
Algorithm
The box distance D between two position vectors Pi and Pj
from a set of S vectors is:
Dij = max(abs(Pi-Pj))
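For instance, two neurons at positions [0;0] and [1;2] have box distance
max(|0-1|,|0-2|) = 2, so the following small check should hold:
pos = [0 1; 0 2];
d = boxdist(pos)       % expected: [0 2; 2 0]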
See also SIM, DIST, MANDIST, LINKDIST.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:10
1503 bytes
CALCA Calculate network outputs and other signals.
Syntax
[Ac,N,LWZ,IWZ,BZ] = calca(net,Pd,Ai,Q,TS)
Description
This function calculates the outputs of each layer in
response to a network's delayed inputs and initial layer
delay conditions.
[Ac,N,LWZ,IWZ,BZ] = CALCA(NET,Pd,Ai,Q,TS) takes,
NET - Neural network.
Pd - Delayed inputs.
Ai - Initial layer delay conditions.
Q - Concurrent size.
TS - Time steps.
and returns,
Ac - Combined layer outputs = [Ai, calculated layer outputs].
N - Net inputs.
LWZ - Weighted layer outputs.
IWZ - Weighted inputs.
BZ - Concurrent biases.
Examples
Here we create a linear network with a single input element
ranging from 0 to 1, three neurons, and a tap delay on the
input with taps at 0, 2, and 4 timesteps. The network is
also given a recurrent connection from layer 1 to itself with
tap delays of [1 2].
net = newlin([0 1],3,[0 2 4]);
net.layerConnect(1,1) = 1;
net.layerWeights{1,1}.delays = [1 2];
Here is a single (Q = 1) input sequence P with 8 timesteps (TS = 8),
and the 4 initial input delay conditions Pi, combined inputs Pc,
and delayed inputs Pd.
P = {0 0.1 0.3 0.6 0.4 0.7 0.2 0.1};
Pi = {0.2 0.3 0.4 0.1};
Pc = [Pi P];
Pd = calcpd(net,8,1,Pc)
Here the two initial layer delay conditions for each of the
three neurons are defined:
Ai = {[0.5; 0.1; 0.2] [0.6; 0.5; 0.2]};
Here we calculate the network's combined outputs Ac, and other
signals described above.
[Ac,N,LWZ,IWZ,BZ] = calca(net,Pd,Ai,1,8)
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:18
4019 bytes
CALCA1 Calculate network signals for one time step.
Syntax
[A,N,LWZ,IWZ,BZ] = CALCA1(NET,PD,Ai,Q)
Description
This function calculates the outputs of each layer in
response to a network's delayed inputs and initial layer
delay conditions, for a single time step.
Calculating outputs for a single time step is useful for
sequential iterative algorithms such as TRAINS, which
need to calculate the network response for each
time step individually.
[Ac,N,LWZ,IWZ,BZ] = CALCA1(NET,Pd,Ai,Q) takes,
NET - Neural network.
Pd - Delayed inputs for a single timestep.
Ai - Initial layer delay conditions for a single timestep.
Q - Concurrent size.
and returns,
A - Layer outputs for the timestep.
N - Net inputs for the timestep.
LWZ - Weighted layer outputs for the timestep.
IWZ - Weighted inputs for the timestep.
BZ - Concurrent biases for the timestep.
Examples
Here we create a linear network with a single input element
ranging from 0 to 1, three neurons, and a tap delay on the
input with taps at 0, 2, and 4 timesteps. The network is
also given a recurrent connection from layer 1 to itself with
tap delays of [1 2].
net = newlin([0 1],3,[0 2 4]);
net.layerConnect(1,1) = 1;
net.layerWeights{1,1}.delays = [1 2];
Here is a single (Q = 1) input sequence P with 8 timesteps (TS = 8),
and the 4 initial input delay conditions Pi, combined inputs Pc,
and delayed inputs Pd.
P = {0 0.1 0.3 0.6 0.4 0.7 0.2 0.1};
Pi = {0.2 0.3 0.4 0.1};
Pc = [Pi P];
Pd = calcpd(net,8,1,Pc)
Here the two initial layer delay conditions for each of the
three neurons are defined:
Ai = {[0.5; 0.1; 0.2] [0.6; 0.5; 0.2]};
Here we calculate the network's combined outputs Ac, and other
signals described above, for timestep 1.
[A,N,LWZ,IWZ,BZ] = calca1(net,Pd(:,:,1),Ai,1)
We can calculate the new layer delay states from Ai and A,
then calculate the signals for timestep 2.
Ai2 = [Ai(:,2:end) A];
[A2,N,LWZ,IWZ,BZ] = calca1(net,Pd(:,:,2),Ai2,1)
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:18
3660 bytes
CALCE Calculate layer errors.
Synopsis
El = calce(net,Ac,Tl,TS)
Description
This function calculates the errors of each layer in
response to layer outputs and targets.
El = CALCE(NET,Ac,Tl,TS) takes,
NET - Neural network.
Ac - Combined layer outputs.
Tl - Layer targets.
TS - Time steps.
and returns,
El - Layer errors.
Examples
Here we create a linear network with a single input element
ranging from 0 to 1, two neurons, and a tap delay on the
input with taps at 0, 2, and 4 timesteps. The network is
also given a recurrent connection from layer 1 to itself with
tap delays of [1 2].
net = newlin([0 1],2);
net.layerConnect(1,1) = 1;
net.layerWeights{1,1}.delays = [1 2];
Here is a single (Q = 1) input sequence P with 5 timesteps (TS = 5),
and the 4 initial input delay conditions Pi, combined inputs Pc,
and delayed inputs Pd.
P = {0 0.1 0.3 0.6 0.4};
Pi = {0.2 0.3 0.4 0.1};
Pc = [Pi P];
Pd = calcpd(net,5,1,Pc);
Here the two initial layer delay conditions for each of the
two neurons are defined, and the networks combined outputs Ac
and other signals are calculated.
Ai = {[0.5; 0.1] [0.6; 0.5]};
[Ac,N,LWZ,IWZ,BZ] = calca(net,Pd,Ai,1,5);
Here we define the layer targets for the two neurons for
each of the five time steps, and calculate the layer errors.
Tl = {[0.1;0.2] [0.3;0.1], [0.5;0.6] [0.8;0.9], [0.5;0.1]};
El = calce(net,Ac,Tl,5)
Here we view the network's error for layer 1 at timestep 2.
El{1,2}
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:17:28
2089 bytes
CALCE1 Calculate layer errors for one time step.
Synopsis
El = calce1(net,A,Tl)
Description
This function calculates the errors of each layer in
response to layer outputs and targets, for a single time step.
Calculating errors for a single time step is useful for
sequential iterative algorithms such as TRAINS which
need to calculate the network response for each
time step individually.
El = CALCE1(NET,A,Tl) takes,
NET - Neural network.
A - Layer outputs, for a single time step.
Tl - Layer targets, for a single time step.
and returns,
El - Layer errors, for a single time step.
Examples
Here we create a linear network with a single input element
ranging from 0 to 1, two neurons, and a tap delay on the
input with taps at 0, 2, and 4 timesteps. The network is
also given a recurrent connection from layer 1 to itself with
tap delays of [1 2].
net = newlin([0 1],2);
net.layerConnect(1,1) = 1;
net.layerWeights{1,1}.delays = [1 2];
Here is a single (Q = 1) input sequence P with 5 timesteps (TS = 5),
and the 4 initial input delay conditions Pi, combined inputs Pc,
and delayed inputs Pd.
P = {0 0.1 0.3 0.6 0.4};
Pi = {0.2 0.3 0.4 0.1};
Pc = [Pi P];
Pd = calcpd(net,5,1,Pc);
Here the two initial layer delay conditions for each of the
two neurons are defined, and the networks combined outputs Ac
and other signals are calculated.
Ai = {[0.5; 0.1] [0.6; 0.5]};
[Ac,N,LWZ,IWZ,BZ] = calca(net,Pd,Ai,1,5);
Here we define the layer targets for the two neurons for
each of the five time steps, and calculate the layer errors
using the first time step layer output Ac(:,3) (the 3
is found by adding the number of layer delays, 2, to the
time step 1), and the first time step targets Tl(:,1).
Tl = {[0.1;0.2] [0.3;0.1], [0.5;0.6] [0.8;0.9], [0.5;0.1]};
El = calce1(net,Ac(:,3),Tl(:,1))
Here we view the network's error for layer 1.
El{1}
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:17:30
2469 bytes
CALCERR Calculates matrix or cell array errors.
E = CALCERR(T,A)
T - MxN matrix.
A - MxN matrix.
Returns
E - MxN matrix T-A.
E = CALCERR(T,A)
T - MxN cell array of matrices T{i,j}.
A - MxN cell array of matrices A{i,j}.
Returns
E - MxN cell array of matrices T{i,j}-A{i,j}.
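A minimal example (assuming, as in CALCE, that the first argument holds targets
and the second outputs, so the result is the elementwise difference):
T = [1 2; 3 4];
A = [0.9 2.5; 3 3];
E = calcerr(T,A)       % expected: [0.1 -0.5; 0 1]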
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:17:34
767 bytes
CALCFDOT Calculate derivatives of transfer functions for use in dynamic gradient functions.
Synopsis
[S] = calcfdot(i,TF,transferParam,TS,Q,Ae,numLayerDelays,N,extrazeros,layerSize)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code which calls this function.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:20
1245 bytes
CALCGBTT Calculate bias and weight performance gradients using the backpropagation through time algorithm.
Synopsis
[gB,gIW,gLW,gA] = calcgbtt(net,Q,PD,BZ,IWZ,LWZ,N,Ac,gE,TS)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code which calls this function.
ApplicationRoot\WavixIV\neural501
16-Jun-2006 21:37:02
18515 bytes
CALCGFP Calculate bias and weight performance gradients.
Synopsis
[gB,gIW,gLW] = calcgfp(net,Q,PD,BZ,IWZ,LWZ,N,Ac,gE,TS,time_base)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code which calls this function.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:22
22684 bytes
CALCGRAD Calculate bias and weight performance gradients.
Synopsis
[gB,gIW,gLW] = calcgrad(net,Q,PD,BZ,IWZ,LWZ,N,Ac,gE,TS)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code which calls this function.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:22
6466 bytes
CALCGX Calculate weight and bias performance gradient as a single vector.
Syntax
[gX,normgX] = calcgx(net,X,Pd,BZ,IWZ,LWZ,N,Ac,El,perf,Q,TS);
Description
This function calculates the gradient of a network's performance
with respect to its vector of weight and bias values X.
If the network has no layer delays with taps greater than 0
the result is the true gradient.
If the network has layer delays greater than 0, the result is
the Elman gradient, an approximation of the true gradient.
[gX,normgX] = CALCGX(NET,X,Pd,BZ,IWZ,LWZ,N,Ac,El,perf,Q,TS) takes,
NET - Neural network.
X - Vector of weight and bias values.
Pd - Delayed inputs.
BZ - Concurrent biases.
IWZ - Weighted inputs.
LWZ - Weighted layer outputs.
N - Net inputs.
Ac - Combined layer outputs.
El - Layer errors.
perf - Network performance.
Q - Concurrent size.
TS - Time steps.
and returns,
gX - Gradient dPerf/dX.
normgX - Norm of gradient.
Examples
Here we create a linear network with a single input element
ranging from 0 to 1, two neurons, and a tap delay on the
input with taps at 0, 2, and 4 timesteps. The network is
also given a recurrent connection from layer 1 to itself with
tap delays of [1 2].
net = newlin([0 1],2);
net.layerConnect(1,1) = 1;
net.layerWeights{1,1}.delays = [1 2];
Here is a single (Q = 1) input sequence P with 5 timesteps (TS = 5),
and the 4 initial input delay conditions Pi, combined inputs Pc,
and delayed inputs Pd.
P = {0 0.1 0.3 0.6 0.4};
Pi = {0.2 0.3 0.4 0.1};
Pc = [Pi P];
Pd = calcpd(net,5,1,Pc);
Here the two initial layer delay conditions for each of the
two neurons, and the layer targets for the two neurons over
five timesteps are defined.
Ai = {[0.5; 0.1] [0.6; 0.5]};
Tl = {[0.1;0.2] [0.3;0.1], [0.5;0.6] [0.8;0.9], [0.5;0.1]};
Here the network's weight and bias values are extracted, and
the network's performance and other signals are calculated.
X = getx(net);
[perf,El,Ac,N,BZ,IWZ,LWZ] = calcperf(net,X,Pd,Tl,Ai,1,5);
Finally we can use CALCGX to calculate the gradient of performance
with respect to the weight and bias values X.
[gX,normgX] = calcgx(net,X,Pd,BZ,IWZ,LWZ,N,Ac,El,perf,1,5);
See also CALCJX, CALCJEJJ.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:24
3686 bytes
CALCJEJJ Calculate Jacobian performance vector.
Syntax
[je,jj,normje] = calcjejj(net,Pd,BZ,IWZ,LWZ,N,Ac,El,Q,TS,MR)
Description
This function calculates two values (related to the Jacobian
of a network) required to calculate the network's Hessian,
in a memory efficient way.
Two values needed to calculate the Hessian of a network are
J*E (Jacobian times errors) and J'J (Jacobian squared).
However the Jacobian J can take up a lot of memory.
This function calculates J*E and J'J by dividing up training
vectors into groups, calculating partial Jacobians Ji and
its associated values Ji*Ei and Ji'Ji, then summing the
partial values into the full J*E and J'J values.
This allows the J*E and J'J values to be calculated with a
series of smaller Ji matrices, instead of a larger J matrix.
[je,jj,normje] = CALCJEJJ(NET,PD,BZ,IWZ,LWZ,N,Ac,El,Q,TS,MR) takes,
NET - Neural network.
PD - Delayed inputs.
BZ - Concurrent biases.
IWZ - Weighted inputs.
LWZ - Weighted layer outputs.
N - Net inputs.
Ac - Combined layer outputs.
El - Layer errors.
Q - Concurrent size.
TS - Time steps.
MR - Memory reduction factor.
and returns,
je - Jacobian times errors.
jj - Jacobian transposed times the Jacobian.
normje - Magnitude of the gradient.
Examples
Here we create a linear network with a single input element
ranging from 0 to 1, two neurons, and a tap delay on the
input with taps at 0, 2, and 4 timesteps. The network is
also given a recurrent connection from layer 1 to itself with
tap delays of [1 2].
net = newlin([0 1],2);
net.layerConnect(1,1) = 1;
net.layerWeights{1,1}.delays = [1 2];
Here is a single (Q = 1) input sequence P with 5 timesteps (TS = 5),
and the 4 initial input delay conditions Pi, combined inputs Pc,
and delayed inputs Pd.
P = {0 0.1 0.3 0.6 0.4};
Pi = {0.2 0.3 0.4 0.1};
Pc = [Pi P];
Pd = calcpd(net,5,1,Pc);
Here the two initial layer delay conditions for each of the
two neurons, and the layer targets for the two neurons over
five timesteps are defined.
Ai = {[0.5; 0.1] [0.6; 0.5]};
Tl = {[0.1;0.2] [0.3;0.1], [0.5;0.6] [0.8;0.9], [0.5;0.1]};
Here the network's weight and bias values are extracted, and
the network's performance and other signals are calculated.
X = getx(net);
[perf,El,Ac,N,BZ,IWZ,LWZ] = calcperf(net,X,Pd,Tl,Ai,1,5);
Finally we can use CALCJEJJ to calculate the Jacobian times error,
Jacobian squared, and the norm of the Jacobian times error using
a memory reduction of 2.
[je,jj,normje] = calcjejj(net,Pd,BZ,IWZ,LWZ,N,Ac,El,1,5,2);
The results should be the same whatever the memory reduction
used. Here a memory reduction of 3 is used.
[je,jj,normje] = calcjejj(net,Pd,BZ,IWZ,LWZ,N,Ac,El,1,5,3);
See also CALCGX, CALCJX.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:24
6162 bytes
CALCJX Calculate weight and bias performance Jacobian as a single matrix.
Syntax
jx = calcjx(net,PD,BZ,IWZ,LWZ,N,Ac,Q,TS)
Description
This function calculates the Jacobian of a network's errors
with respect to its vector of weight and bias values X.
jX = CALCJX(NET,PD,BZ,IWZ,LWZ,N,Ac,Q,TS) takes,
NET - Neural network.
PD - Delayed inputs.
BZ - Concurrent biases.
IWZ - Weighted inputs.
LWZ - Weighted layer outputs.
N - Net inputs.
Ac - Combined layer outputs.
Q - Concurrent size.
TS - Time steps.
and returns,
jX - Jacobian of network errors with respect to X.
Examples
Here we create a linear network with a single input element
ranging from 0 to 1, two neurons, and a tap delay on the
input with taps at 0, 2, and 4 timesteps. The network is
also given a recurrent connection from layer 1 to itself with
tap delays of [1 2].
net = newlin([0 1],2);
net.layerConnect(1,1) = 1;
net.layerWeights{1,1}.delays = [1 2];
Here is a single (Q = 1) input sequence P with 5 timesteps (TS = 5),
and the 4 initial input delay conditions Pi, combined inputs Pc,
and delayed inputs Pd.
P = {0 0.1 0.3 0.6 0.4};
Pi = {0.2 0.3 0.4 0.1};
Pc = [Pi P];
Pd = calcpd(net,5,1,Pc);
Here the two initial layer delay conditions for each of the
two neurons, and the layer targets for the two neurons over
five timesteps are defined.
Ai = {[0.5; 0.1] [0.6; 0.5]};
Tl = {[0.1;0.2] [0.3;0.1], [0.5;0.6] [0.8;0.9], [0.5;0.1]};
Here the network's weight and bias values are extracted, and
the network's performance and other signals are calculated.
X = getx(net);
[perf,El,Ac,N,BZ,IWZ,LWZ] = calcperf(net,X,Pd,Tl,Ai,1,5);
Finally we can use CALCJX to calculate the Jacobian.
jX = calcjx(net,Pd,BZ,IWZ,LWZ,N,Ac,1,5);
See also CALCGX, CALCJEJJ.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:26
9776 bytes
CALCJXBT Calculate weight and bias performance Jacobian as a single matrix.
Syntax
jx = calcjxbt(net,PD,BZ,IWZ,LWZ,N,Ac,Q,TS)
Description
This function calculates the Jacobian of a network's errors
with respect to its vector of weight and bias values X.
jX = CALCJXBT(NET,PD,BZ,IWZ,LWZ,N,Ac,Q,TS) takes,
NET - Neural network.
PD - Delayed inputs.
BZ - Concurrent biases.
IWZ - Weighted inputs.
LWZ - Weighted layer outputs.
N - Net inputs.
Ac - Combined layer outputs.
Q - Concurrent size.
TS - Time steps.
and returns,
jX - Jacobian of network errors with respect to X.
Examples
Here we create a linear network with a single input element
ranging from 0 to 1, two neurons, and a tap delay on the
input with taps at 0, 2, and 4 timesteps. The network is
also given a recurrent connection from layer 1 to itself with
tap delays of [1 2].
net = newlin([0 1],2);
net.layerConnect(1,1) = 1;
net.layerWeights{1,1}.delays = [1 2];
We initialize the weights to specific values:
net.IW{1}=[0.1;0.2];
net.LW{1}=[0.01 0.02 0.03 0.04; 0.05 0.06 0.07 0.07];
net.b{1}=[0.3; 0.4];
Here is a single (Q = 1) input sequence P with 5 timesteps (TS = 5),
and the 4 initial input delay conditions Pi, combined inputs Pc,
and delayed inputs Pd.
P = {0 0.1 0.3 0.6 0.4};
Pi = {0.2 0.3 0.4 0.1};
Pc = [Pi P];
Pd = calcpd(net,5,1,Pc);
Here the two initial layer delay conditions for each of the
two neurons, and the layer targets for the two neurons over
five timesteps are defined.
Ai = {[0.5; 0.1] [0.6; 0.5]};
Tl = {[0.1;0.2] [0.3;0.1], [0.5;0.6] [0.8;0.9], [0.5;0.1]};
Here the network's weight and bias values are extracted, and
the network's performance and other signals are calculated.
X = getx(net);
[perf,El,Ac,N,BZ,IWZ,LWZ] = calcperf(net,X,Pd,Tl,Ai,1,5);
Finally we can use CALCJXBT to calculate the Jacobian.
jX = calcjxbt(net,Pd,BZ,IWZ,LWZ,N,Ac,1,5);
IMPORTANT: If you use the regular CALCJX the gradient values will
differ because the dynamics are not being considered.
See also CALCGX, CALCJXFP.
ApplicationRoot\WavixIV\neural501
14-Nov-2005 19:20:42
4173 bytes
CALCJXFP Calculate weight and bias performance Jacobian as a single matrix.
Syntax
jx = calcjxfp(net,PD,BZ,IWZ,LWZ,N,Ac,Q,TS)
Description
This function calculates the Jacobian of a network's errors
with respect to its vector of weight and bias values X.
jX = CALCJXFP(NET,PD,BZ,IWZ,LWZ,N,Ac,Q,TS) takes,
NET - Neural network.
PD - Delayed inputs.
BZ - Concurrent biases.
IWZ - Weighted inputs.
LWZ - Weighted layer outputs.
N - Net inputs.
Ac - Combined layer outputs.
Q - Concurrent size.
TS - Time steps.
and returns,
jX - Jacobian of network errors with respect to X.
Examples
Here we create a linear network with a single input element
ranging from 0 to 1, two neurons, and a tap delay on the
input with taps at 0, 2, and 4 timesteps. The network is
also given a recurrent connection from layer 1 to itself with
tap delays of [1 2].
net = newlin([0 1],2);
net.layerConnect(1,1) = 1;
net.layerWeights{1,1}.delays = [1 2];
We initialize the weights to specific values:
net.IW{1}=[0.1;0.2];
net.LW{1}=[0.01 0.02 0.03 0.04; 0.05 0.06 0.07 0.07];
net.b{1}=[0.3; 0.4];
Here is a single (Q = 1) input sequence P with 5 timesteps (TS = 5),
and the 4 initial input delay conditions Pi, combined inputs Pc,
and delayed inputs Pd.
P = {0 0.1 0.3 0.6 0.4};
Pi = {0.2 0.3 0.4 0.1};
Pc = [Pi P];
Pd = calcpd(net,5,1,Pc);
Here the two initial layer delay conditions for each of the
two neurons, and the layer targets for the two neurons over
five timesteps are defined.
Ai = {[0.5; 0.1] [0.6; 0.5]};
Tl = {[0.1;0.2] [0.3;0.1], [0.5;0.6] [0.8;0.9], [0.5;0.1]};
Here the network's weight and bias values are extracted, and
the network's performance and other signals are calculated.
X = getx(net);
[perf,El,Ac,N,BZ,IWZ,LWZ] = calcperf(net,X,Pd,Tl,Ai,1,5);
Finally we can use CALCJXFP to calculate the Jacobian.
jX = calcjxfp(net,Pd,BZ,IWZ,LWZ,N,Ac,1,5);
IMPORTANT: If you use the regular CALCJX the gradient values will
differ because the dynamics are not being considered.
See also CALCGX, CALCJXBT.
ApplicationRoot\WavixIV\neural501
14-Nov-2005 19:20:44
4147 bytes
CALCPD Calculate delayed network inputs.
Syntax
Pd = calcpd(net,TS,Q,Pc)
Description
This function calculates the results of passing the network
inputs through each input weight's tap delay line.
Pd = CALCPD(NET,TS,Q,Pc) takes,
NET - Neural network.
TS - Time steps.
Q - Concurrent size.
Pc - Combined inputs = [initial delay conditions, network inputs].
and returns,
Pd - Delayed inputs.
Examples
Here we create a linear network with a single input element
ranging from 0 to 1, three neurons, and a tap delay on the
input with taps at 0, 2, and 4 timesteps.
net = newlin([0 1],3,[0 2 4]);
Here is a single (Q = 1) input sequence P with 8 timesteps (TS = 8).
P = {0 0.1 0.3 0.6 0.4 0.7 0.2 0.1};
Here we define the 4 initial input delay conditions Pi.
Pi = {0.2 0.3 0.4 0.1};
The delayed inputs (the inputs after passing through the tap
delays) can be calculated with CALCPD.
Pc = [Pi P];
Pd = calcpd(net,8,1,Pc)
Here we view the delayed inputs for input weight going to layer 1,
from input 1 at timesteps 1 and 2.
Pd{1,1,1}
Pd{1,1,2}
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:17:46
2036 bytes
CALCPERF Calculate network outputs, signals, and performance.
Synopsis
[perf,El,Ac,N,BZ,IWZ,LWZ]=calcperf(net,X,Pd,Tl,Ai,Q,TS)
Description
This function calculates the outputs of each layer in
response to a network's delayed inputs and initial layer
delay conditions.
[perf,El,Ac,N,LWZ,IWZ,BZ] = CALCPERF(NET,X,Pd,Tl,Ai,Q,TS) takes,
NET - Neural network.
X - Network weight and bias values in a single vector.
Pd - Delayed inputs.
Tl - Layer targets.
Ai - Initial layer delay conditions.
Q - Concurrent size.
TS - Time steps.
and returns,
perf - Network performance.
El - Layer errors.
Ac - Combined layer outputs = [Ai, calculated layer outputs].
N - Net inputs.
LWZ - Weighted layer outputs.
IWZ - Weighted inputs.
BZ - Concurrent biases.
Examples
Here we create a linear network with a single input element
ranging from 0 to 1, two neurons, and a tap delay on the
input with taps at 0, 2, and 4 timesteps. The network is
also given a recurrent connection from layer 1 to itself with
tap delays of [1 2].
net = newlin([0 1],2);
net.layerConnect(1,1) = 1;
net.layerWeights{1,1}.delays = [1 2];
Here is a single (Q = 1) input sequence P with 5 timesteps (TS = 5),
and the 4 initial input delay conditions Pi, combined inputs Pc,
and delayed inputs Pd.
P = {0 0.1 0.3 0.6 0.4};
Pi = {0.2 0.3 0.4 0.1};
Pc = [Pi P];
Pd = calcpd(net,5,1,Pc);
Here the two initial layer delay conditions for each of the
two neurons are defined.
Ai = {[0.5; 0.1] [0.6; 0.5]};
Here we define the layer targets for the two neurons for
each of the five time steps.
Tl = {[0.1;0.2] [0.3;0.1], [0.5;0.6] [0.8;0.9], [0.5;0.1]};
Here the network's weight and bias values are extracted.
X = getx(net);
Here we calculate the network's combined outputs Ac, and other
signals described above.
[perf,El,Ac,N,BZ,IWZ,LWZ] = calcperf(net,X,Pd,Tl,Ai,1,5)
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:26
6299 bytes
CLIPTR Clip training record to the final number of epochs.
Syntax
tr = cliptr(tr,epochs)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code which calls this function.
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:17:54
569 bytes
COMBVEC Create all combinations of vectors.
Syntax
combvec(a1,a2,...)
Description
COMBVEC(A1,A2,...) takes any number of inputs,
A1 - Matrix of N1 (column) vectors.
A2 - Matrix of N2 (column) vectors.
and returns a matrix of (N1*N2*...) column vectors, where the columns
consist of all possibilities of A2 vectors, appended to
A1 vectors, etc.
Example
a1 = [1 2 3; 4 5 6];
a2 = [7 8; 9 10];
a3 = combvec(a1,a2)
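This should yield the six combined column vectors
a3 = [1 2 3 1 2 3; 4 5 6 4 5 6; 7 7 7 8 8 8; 9 9 9 10 10 10]
with each A1 column paired with each A2 column.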
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:12
1275 bytes
COMPET Competitive transfer function.
Syntax
A = compet(N,FP)
dA_dN = compet('dn',N,A,FP)
INFO = compet(CODE)
Description
COMPET is a neural transfer function. Transfer functions
calculate a layer's output from its net input.
COMPET(N,FP) takes N and optional function parameters,
N - SxQ matrix of net input (column) vectors.
FP - Struct of function parameters (ignored).
and returns SxQ matrix A with a 1 in each column where
the same column of N has its maximum value, and 0 elsewhere.
COMPET('dn',N,A,FP) returns the derivative of A with respect to N.
If A or FP are not supplied or are set to [], FP reverts to
the default parameters, and A is calculated from N.
COMPET('name') returns the name of this function.
COMPET('output',FP) returns the [min max] output range.
COMPET('active',FP) returns the [min max] active input range.
COMPET('fullderiv') returns 1 or 0, whether DA_DN is SxSxQ or SxQ.
COMPET('fpnames') returns the names of the function parameters.
COMPET('fpdefaults') returns the default function parameters.
Examples
Here we define a net input vector N, calculate the output,
and plot both with bar graphs.
n = [0; 1; -0.5; 0.5];
a = compet(n);
subplot(2,1,1), bar(n), ylabel('n')
subplot(2,1,2), bar(a), ylabel('a')
Here we assign this transfer function to layer i of a network.
net.layers{i}.transferFcn = 'compet';
See also SIM, SOFTMAX.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:06
2747 bytes
COMPETSL Competitive transfer function used by SIMULINK.
Syntax
a = competsl(n)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code which calls this function.
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:17:48
468 bytes
CON2SEQ Convert concurrent vectors to sequential vectors.
Syntax
s = con2seq(b)
Description
The neural network toolbox arranges concurrent vectors
with a matrix, and sequential vectors with a cell array
(where the second index is the time step).
CON2SEQ and SEQ2CON allow concurrent vectors to be converted
to sequential vectors, and back again.
CON2SEQ(B) takes one input,
B - RxTS matrix.
and returns one output,
S - 1xTS cell array of Rx1 vectors.
CON2SEQ(B,TS) can also convert multiple batches,
B - Nx1 cell array of matrices with M*TS columns.
TS - Time steps.
and will return,
S - NxTS cell array of matrices with M columns.
Example
Here a batch of three values is converted to a
sequence.
p1 = [1 4 2]
p2 = con2seq(p1)
Here two batches of vectors are converted to
two sequences with two time steps.
p1 = {[1 3 4 5; 1 1 7 4]; [7 3 4 4; 6 9 4 1]}
p2 = con2seq(p1,2)
See also SEQ2CON, CONCUR.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:14
1740 bytes
CONCUR Create concurrent bias vectors.
Syntax
concur(b,q)
Description
CONCUR(B,Q)
B - Nlx1 cell array of bias vectors.
Q - Concurrent size.
Returns an SxQ matrix of Q copies of B (or an Nlx1 cell array of matrices).
Examples
Here CONCUR creates three copies of a bias vector.
b = [1; 3; 2; -1];
concur(b,3)
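The three copies appear as columns, so the result should be the 4x3 matrix
[1 1 1; 3 3 3; 2 2 2; -1 -1 -1].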
Network Use
To calculate a layer's net input, the layer's weighted
inputs must be combined with its biases. The following
expression calculates the net input for a layer with
the NETSUM net input function, two input weights, and
a bias:
n = netsum(z1,z2,b)
The above expression works if Z1, Z2, and B are all Sx1
vectors. However, if the network is being simulated by SIM
(or ADAPT or TRAIN) in response to Q concurrent vectors,
then Z1 and Z2 will be SxQ matrices. Before B can be
combined with Z1 and Z2 we must make Q copies of it.
n = netsum(z1,z2,concur(b,q))
See also NETSUM, NETPROD, SIM, SEQ2CON, CON2SEQ.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:14
1493 bytes
CONVWF Convolution weight function.
Syntax
Z = convwf(W,P)
dim = convwf('size',S,R,FP)
dp = convwf('dp',W,P,Z,FP)
dw = convwf('dw',W,P,Z,FP)
info = convwf(code)
Description
CONVWF is the convolution weight function. Weight functions
apply weights to an input to get weighted inputs.
CONVWF(code) returns information about this function.
These codes are defined:
'deriv' - Name of derivative function.
'fullderiv' - Reduced derivative = 2, Full derivative = 1, linear derivative = 0.
'pfullderiv' - Input: Reduced derivative = 2, Full derivative = 1, linear derivative = 0.
'wfullderiv' - Weight: Reduced derivative = 2, Full derivative = 1, linear derivative = 0.
'name' - Full name.
'fpnames' - Returns names of function parameters.
'fpdefaults' - Returns default function parameters.
CONVWF('size',S,R,FP) takes the layer dimension S, input dimension R,
and function parameters, and returns the weight size.
CONVWF('dp',W,P,Z,FP) returns the derivative of Z with respect to P.
CONVWF('dw',W,P,Z,FP) returns the derivative of Z with respect to W.
Examples
Here we define a random weight matrix W and input vector P
and calculate the corresponding weighted input Z.
W = rand(4,1);
P = rand(8,1);
Z = convwf(W,P)
Network Use
To change a network so an input weight uses CONVWF set
NET.inputWeights{i,j}.weightFcn to 'convwf'. For a layer weight
set NET.layerWeights{i,j}.weightFcn to 'convwf'.
In either case, call SIM to simulate the network with CONVWF.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:20
3269 bytes
DIST Euclidean distance weight function.
Syntax
Z = dist(W,P,FP)
info = dist(code)
dim = dist('size',S,R,FP)
dp = dist('dp',W,P,Z,FP)
dw = dist('dw',W,P,Z,FP)
D = dist(pos)
Description
DIST is the Euclidean distance weight function. Weight
functions apply weights to an input to get weighted inputs.
DIST(W,P,FP) takes these inputs,
W - SxR weight matrix.
P - RxQ matrix of Q input (column) vectors.
FP - Row cell array of function parameters (optional, ignored).
and returns the SxQ matrix of vector distances.
DIST(code) returns information about this function.
These codes are defined:
'deriv' - Name of derivative function.
'fullderiv' - Full derivative = 1, linear derivative = 0.
'name' - Full name.
'fpnames' - Returns names of function parameters.
'fpdefaults' - Returns default function parameters.
DIST('size',S,R,FP) takes the layer dimension S, input dimension R,
and function parameters, and returns the weight size [SxR].
DIST('dp',W,P,Z,FP) returns the derivative of Z with respect to P.
DIST('dw',W,P,Z,FP) returns the derivative of Z with respect to W.
DIST is also a layer distance function which can be used
to find the distances between neurons in a layer.
DIST(POS) takes one argument,
POS - NxS matrix of neuron positions.
and returns the SxS matrix of distances.
Examples
Here we define a random weight matrix W and input vector P
and calculate the corresponding weighted input Z.
W = rand(4,3);
P = rand(3,1);
Z = dist(W,P)
Here we define a random matrix of positions for 10 neurons
arranged in three dimensional space and find their distances.
pos = rand(3,10);
D = dist(pos)
Network Use
You can create a standard network that uses DIST
by calling NEWPNN or NEWGRNN.
To change a network so an input weight uses DIST set
NET.inputWeights{i,j}.weightFcn to 'dist'. For a layer weight
set NET.layerWeights{i,j}.weightFcn to 'dist'.
To change a network so that a layer's topology uses DIST set
NET.layers{i}.distanceFcn to 'dist'.
In either case, call SIM to simulate the network with DIST.
See NEWPNN or NEWGRNN for simulation examples.
Algorithm
The Euclidean distance D between two vectors X and Y is:
D = sum((x-y).^2).^0.5
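For instance, two neurons at [0;0] and [3;4] lie at Euclidean distance
sqrt(3^2+4^2) = 5 apart, so the following small check should hold:
pos = [0 3; 0 4];
D = dist(pos)          % expected: [0 5; 5 0]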
See also SIM, DOTPROD, NEGDIST, NORMPROD, MANDIST, LINKDIST.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:20
4796 bytes
DIVIDEVEC Divide problem vectors into training, validation and test vectors.
Syntax
[trainV,valV,testV] = dividevec(p,t,valPercent,testPercent)
Description
DIVIDEVEC is used to separate a set of input and target data into
three groups of vectors: training vectors, validation vectors used to
stop training early if the network begins to overfit the training
data, and test vectors used as an independent measure of how the
network might be expected to perform on data it was not trained on.
DIVIDEVEC(P,T,valPercent,testPercent) takes the following inputs,
P - RxQ matrix of inputs, or cell array of input matrices.
T - SxQ matrix of targets, or cell array of target matrices.
valPercent - Fraction of column vectors to use for validation.
testPercent - Fraction of column vectors to use for test.
and returns:
trainV.P, .T, .indices - Training vectors and their original indices
valV.P, .T, .indices - Validation vectors and their original indices
testV.P, .T, .indices - Test vectors and their original indices
Examples
Here 1000 3-element input and 2-element target vectors are created:
p = rands(3,1000);
t = [p(1,:).*p(2,:); p(2,:).*p(3,:)];
Here they are divided up into training, validation and test sets.
Validation and test sets contain 20% of the vectors each, leaving
60% of the vectors for training.
[trainV,valV,testV] = dividevec(p,t,0.20,0.20);
Now a network is created and trained with the data.
net = newff(minmax(p),[10 size(t,1)]);
net = train(net,trainV.P,trainV.T,[],[],valV,testV);
See also con2seq, seq2con.
ApplicationRoot\WavixIV\neural501
25-Jan-2006 19:49:20
3151 bytes
DNULLPF Derivative of null performance function.
DNULLPF('E',E,X,PERF)
E - Layer errors.
X - Vector of weight and bias values.
Returns zeros.
DNULLPF('X',E,X,PERF)
Returns zeros.
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:18:06
624 bytes
DNULLTF Null transfer derivative function.
Syntax
dA_dN = dnulltf(N,A)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code which calls this function.
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:18:12
416 bytes
DNULLWF Null weight derivative function.
Syntax
dZ_dP = dnullwf('p',W,P,Z)
dZ_dW = dnullwf('w',W,P,Z)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code which calls this function.
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:18:22
569 bytes
DOTPROD Dot product weight function.
Syntax
Z = dotprod(W,P,FP)
info = dotprod(code)
dim = dotprod('size',S,R,FP)
dp = dotprod('dp',W,P,Z,FP)
dw = dotprod('dw',W,P,Z,FP)
Description
DOTPROD is the dot product weight function. Weight functions
apply weights to an input to get weighted inputs.
DOTPROD(W,P,FP) takes these inputs,
W - SxR weight matrix.
P - RxQ matrix of Q input (column) vectors.
FP - Row cell array of function parameters (optional, ignored).
and returns the SxQ dot product of W and P.
DOTPROD(code) returns information about this function.
These codes are defined:
'deriv' - Name of derivative function (for ver. 4).
'pfullderiv' - Input: Reduced derivative = 2, Full derivative = 1, linear derivative = 0.
'wfullderiv' - Weight: Reduced derivative = 2, Full derivative = 1, linear derivative = 0.
'name' - Full name.
'fpnames' - Returns names of function parameters.
'fpdefaults' - Returns default function parameters.
DOTPROD('size',S,R,FP) takes the layer dimension S, input dimension R,
and function parameters, and returns the weight size [SxR].
DOTPROD('dp',W,P,Z,FP) returns the derivative of Z with respect to P.
DOTPROD('dw',W,P,Z,FP) returns the derivative of Z with respect to W.
Examples
Here we define a random weight matrix W and input vector P
and calculate the corresponding weighted input Z.
W = rand(4,3);
P = rand(3,1);
Z = dotprod(W,P)
Network Use
You can create a standard network that uses DOTPROD
by calling NEWP or NEWLIN.
To change a network so an input weight uses DOTPROD set
NET.inputWeights{i,j}.weightFcn to 'dotprod'. For a layer weight
set NET.layerWeights{i,j}.weightFcn to 'dotprod'.
In either case, call SIM to simulate the network with DOTPROD.
See NEWP and NEWLIN for simulation examples.
See also SIM, DDOTPROD, DIST, NEGDIST, NORMPROD.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:22
3422 bytes
ERRSURF Error surface of single input neuron.
Syntax
E = errsurf(P,T,WV,BV,F)
Description
ERRSURF(P,T,WV,BV,F) takes these arguments,
P - 1xQ matrix of input vectors.
T - 1xQ matrix of target vectors.
WV - Row vector of values of W.
BV - Row vector of values of B.
F - Transfer function (string).
and returns a matrix of error values over WV and BV.
Examples
p = [-6.0 -6.1 -4.1 -4.0 +4.0 +4.1 +6.0 +6.1];
t = [+0.0 +0.0 +.97 +.99 +.01 +.03 +1.0 +1.0];
wv = -1:.1:1; bv = -2.5:.25:2.5;
es = errsurf(p,t,wv,bv,'logsig');
plotes(wv,bv,es,[60 30])
See also PLOTES.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:18:54
1074 bytes
FIXUNKNOWNS Processes matrix rows with unknown values.
Syntax
[y,ps] = fixunknowns(x)
[y,ps] = fixunknowns(x,fp)
y = fixunknowns('apply',x,ps)
x = fixunknowns('reverse',y,ps)
dx_dy = fixunknowns('dx',x,y,ps)
dx_dy = fixunknowns('dx',x,[],ps)
name = fixunknowns('name');
fp = fixunknowns('pdefaults');
names = fixunknowns('pnames');
fixunknowns('pcheck',fp);
Description
FIXUNKNOWNS processes matrices by replacing each row containing
unknown values (represented by NaN) with two rows of information.
The first row contains the original row, with NaN values replaced
by the row's mean. The second row contains 1 and 0 values, indicating
which values in the first row were known or unknown, respectively.
FIXUNKNOWNS(X) takes these inputs,
X - Single NxQ matrix or a 1xTS row cell array of NxQ matrices.
and returns,
Y - Each MxQ matrix with M-N rows added (optional).
PS - Process settings, to allow consistent processing of values.
FIXUNKNOWNS(X,FP) takes empty struct FP of parameters.
FIXUNKNOWNS('apply',X,PS) returns Y, given X and settings PS.
FIXUNKNOWNS('reverse',Y,PS) returns X, given Y and settings PS.
FIXUNKNOWNS('dx',X,Y,PS) returns MxNxQ derivative of Y w/respect to X.
FIXUNKNOWNS('dx',X,[],PS) returns the derivative, less efficiently.
FIXUNKNOWNS('name') returns the name of this process method.
FIXUNKNOWNS('pdefaults') returns default process parameter structure.
FIXUNKNOWNS('pdesc') returns the process parameter descriptions.
FIXUNKNOWNS('pcheck',fp) throws an error if any parameter is illegal.
Examples
Here is how to format a matrix with a mixture of known and
unknown values in its second row.
x1 = [1 2 3 4; 4 NaN 6 5; NaN 2 3 NaN]
[y1,ps] = fixunknowns(x1)
Next, we apply the same processing settings to new values.
x2 = [4 5 3 2; NaN 9 NaN 2; 4 9 5 2]
y2 = fixunknowns('apply',x2,ps)
Here we reverse the processing of y1 to get x1 again.
x1_again = fixunknowns('reverse',y1,ps)
See also MAPMINMAX, MAPSTD, PROCESSPCA, REMOVECONSTANTROWS
ApplicationRoot\WavixIV\neural501
16-Jun-2006 21:37:00
4965 bytes
FORMGX Form bias and weights into single vector.
Syntax
gX = formgx(net,gB,gIW,gLW)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code which calls this function.
See also GETX, SETX.
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:18:04
962 bytes
FORMX Form bias and weights into single vector.
Syntax
X = formx(net,B,IW,LW)
Description
This function takes weight matrices and bias vectors
for a network and reshapes them into a single vector.
X = FORMX(NET,B,IW,LW) takes these arguments,
NET - Neural network.
B - Nlx1 cell array of bias vectors.
IW - NlxNi cell array of input weight matrices.
LW - NlxNl cell array of layer weight matrices.
and returns,
X - Vector of weight and bias values.
Examples
Here we create a network with a 2-element input, and one
layer of 3 neurons.
net = newff([0 1; -1 1],[3]);
We can view its weight matrices and bias vectors as follows:
b = net.b
iw = net.iw
lw = net.lw
We can put these values into a single vector as follows:
x = formx(net,net.b,net.iw,net.lw)
See also GETX, SETX.
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:18:28
1665 bytes
GETX Get all network weight and bias values as a single vector.
Syntax
X = getx(net)
Description
This function gets a network's weights and biases as
a vector of values.
X = GETX(NET)
NET - Neural network.
X - Vector of weight and bias values.
Examples
Here we create a network with a 2-element input, and one
layer of 3 neurons.
net = newff([0 1; -1 1],[3]);
We can get its weight and bias values as follows:
net.iw{1,1}
net.b{1}
We can get these values as a single vector as follows:
x = getx(net);
See also SETX, FORMX.
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:18:10
1370 bytes
GRIDTOP Grid layer topology function.
Syntax
pos = gridtop(dim1,dim2,...,dimN)
Description
GRIDTOP calculates neuron positions for layers whose
neurons are arranged in an N dimensional grid.
GRIDTOP(DIM1,DIM2,...,DIMN) takes N arguments,
DIMi - Length of layer in dimension i.
and returns an NxS matrix of N coordinate vectors
where S is the product of DIM1*DIM2*...*DIMN.
Examples
This code creates and displays a two-dimensional layer
with 40 neurons arranged in an 8x5 grid.
pos = gridtop(8,5); plotsom(pos)
This code plots the connections between the same neurons,
but shows each neuron at the location of its weight vector.
The weights are generated randomly so the layer is
very disorganized, as is evident in the following plot.
W = rands(40,2); plotsom(W,dist(pos))
See also HEXTOP, RANDTOP.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:48
1413 bytes
HARDLIM Hard limit transfer function.
Syntax
A = hardlim(N,FP)
dA_dN = hardlim('dn',N,A,FP)
INFO = hardlim(CODE)
Description
HARDLIM is a neural transfer function. Transfer functions
calculate a layer's output from its net input.
HARDLIM(N,FP) takes N and optional function parameters,
N - SxQ matrix of net input (column) vectors.
FP - Struct of function parameters (ignored).
and returns A, the SxQ boolean matrix with 1's where N >= 0.
HARDLIM('dn',N,A,FP) returns the SxQ derivative of A with respect to N.
If A or FP are not supplied or are set to [], FP reverts to
the default parameters, and A is calculated from N.
HARDLIM('name') returns the name of this function.
HARDLIM('output',FP) returns the [min max] output range.
HARDLIM('active',FP) returns the [min max] active input range.
HARDLIM('fullderiv') returns 1 or 0, whether DA_DN is SxSxQ or SxQ.
HARDLIM('fpnames') returns the names of the function parameters.
HARDLIM('fpdefaults') returns the default function parameters.
Examples
Here is how to create a plot of the HARDLIM transfer function.
n = -5:0.1:5;
a = hardlim(n);
plot(n,a)
Here we assign this transfer function to layer i of a network.
net.layers{i}.transferFcn = 'hardlim';
Algorithm
hardlim(n) = 1, if n >= 0
0, otherwise
See also SIM, HARDLIMS.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:08
2656 bytes
HARDLIMS Symmetric hard limit transfer function.
Syntax
A = hardlims(N,FP)
dA_dN = hardlims('dn',N,A,FP)
INFO = hardlims(CODE)
Description
HARDLIMS is a neural transfer function. Transfer functions
calculate a layer's output from its net input.
HARDLIMS(N,FP) takes N and optional function parameters,
N - SxQ matrix of net input (column) vectors.
FP - Struct of function parameters (ignored).
and returns A, the SxQ +1/-1 matrix with +1's where N >= 0.
HARDLIMS('dn',N,A,FP) returns the SxQ derivative of A with respect to N.
If A or FP are not supplied or are set to [], FP reverts to
the default parameters, and A is calculated from N.
HARDLIMS('name') returns the name of this function.
HARDLIMS('output',FP) returns the [min max] output range.
HARDLIMS('active',FP) returns the [min max] active input range.
HARDLIMS('fullderiv') returns 1 or 0, whether DA_DN is SxSxQ or SxQ.
HARDLIMS('fpnames') returns the names of the function parameters.
HARDLIMS('fpdefaults') returns the default function parameters.
Examples
Here is how to create a plot of the HARDLIMS transfer function.
n = -5:0.1:5;
a = hardlims(n);
plot(n,a)
Here we assign this transfer function to layer i of a network.
net.layers{i}.transferFcn = 'hardlims';
Algorithm
hardlims(n) = 1, if n >= 0
-1, otherwise
See also SIM, HARDLIM.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:08
2700 bytes
HEXTOP Hexagonal layer topology function.
Syntax
pos = hextop(dim1,dim2,...,dimN)
Description
HEXTOP calculates the neuron positions for layers whose
neurons are arranged in an N dimensional hexagonal pattern.
HEXTOP(DIM1,DIM2,...,DIMN) takes N arguments,
DIMi - Length of layer in dimension i.
and returns an NxS matrix of N coordinate vectors
where S is the product of DIM1*DIM2*...*DIMN.
Examples
This code creates and displays a two-dimensional layer
with 40 neurons arranged in an 8x5 hexagonal pattern.
pos = hextop(8,5); plotsom(pos)
This code plots the connections between the same neurons,
but shows each neuron at the location of its weight vector.
The weights are generated randomly so that the layer is
very disorganized, as is evident in the following plot.
W = rands(40,2); plotsom(W,dist(pos))
See also GRIDTOP, RANDTOP.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:48
1710 bytes
HINTONW Hinton graph of weight matrix.
Syntax
hintonw(W,maxw,minw)
Description
HINTONW(W,MAXW,MINW) takes these inputs,
W - SxR weight matrix
MAXW - Maximum weight, default = max(max(abs(W))).
MINW - Minimum weight, default = MAXW/100.
and displays a weight matrix represented as a grid of squares.
Each square's AREA represents a weight's magnitude.
Each square's COLOR represents a weight's sign.
RED for negative weights, GREEN for positive.
Examples
W = rands(4,5);
hintonw(W)
See also HINTONWB.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:24
2212 bytes
HINTONWB Hinton graph of weight matrix and bias vector.
Syntax
hintonwb(W,b,maxw,minw)
Description
HINTONWB(W,B,MAXW,MINW) takes these inputs,
W - SxR weight matrix.
B - Sx1 bias vector.
MAXW - Maximum weight, default = max(max(abs(W))).
MINW - Minimum weight, default = MAXW/100.
and displays a weight matrix and a bias vector represented
as a grid of squares.
Each square's AREA represents a weight's magnitude.
Each square's COLOR represents a weight's sign.
RED for negative weights, GREEN for positive.
Examples
W = rands(4,5);
b = rands(4,1);
hintonwb(W,b)
See also HINTONW.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:26
2671 bytes
IND2VEC Convert indices to vectors.
Syntax
vec = ind2vec(ind)
Description
IND2VEC and VEC2IND allow indices to be represented
either by themselves, or as vectors containing a 1 in the
row of the index they represent.
IND2VEC(IND) takes one argument,
IND - Row vector of indices.
and returns sparse matrix of vectors, with one 1 in
each column, as indicated by IND.
Examples
Here four indices are defined and converted to vector
representation.
ind = [1 3 2 3]
vec = ind2vec(ind)
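The result is returned as a sparse matrix; in full form it should read
full(vec)              % expected: [1 0 0 0; 0 0 1 0; 0 1 0 1]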
See also VEC2IND.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:16
802 bytes
INITCON Conscience bias initialization function.
Syntax
b = initcon(s,pr);
Description
INITCON is a bias initialization function that initializes
biases for learning with the LEARNCON learning function.
INITCON(S,PR) takes two arguments
S - Number of rows (neurons).
PR - Rx2 matrix of input value ranges = [Pmin Pmax], default = [1 1].
and returns an Sx1 bias vector.
Note that for biases, R is always 1. INITCON could
also be used to initialize weights, but it is not
recommended for that purpose.
Examples
Here initial bias values are calculated for a 5 neuron layer.
b = initcon(5)
Network Use
You can create a standard network that uses INITCON to initialize
weights by calling NEWC.
To prepare the bias of layer i of a custom network
to initialize with INITCON:
1) Set NET.initFcn to 'initlay'.
(NET.initParam will automatically become INITLAY's default parameters.)
2) Set NET.layers{i}.initFcn to 'initwb'.
3) Set NET.biases{i}.initFcn to 'initcon'.
To initialize the network call INIT.
See NEWC for initialization examples.
Algorithm
LEARNCON updates biases so that each bias value b(i) is
a function of the average output c(i) of the neuron i associated
with the bias.
INITCON gets initial bias values by assuming that each
neuron has responded to equal numbers of vectors in the "past".
See also INITWB, INITLAY, INIT, LEARNCON.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:26
1837 bytes
INITLAY Layer-by-layer network initialization function.
Syntax
net = initlay(net)
info = initlay(code)
Description
INITLAY is a network initialization function which
initializes each layer i according to its own initialization
function NET.layers{i}.initFcn.
INITLAY(NET) takes:
NET - Neural network.
and returns the network with each layer updated.
INITLAY(CODE) returns useful information for each CODE string:
'pnames' - Names of initialization parameters.
'pdefaults' - Default initialization parameters.
INITLAY does not have any initialization parameters.
Network Use
You can create a standard network that uses INITLAY by calling
NEWP, NEWLIN, NEWFF, NEWCF, and many other new network functions.
To prepare a custom network to be initialized with INITLAY:
1) Set NET.initFcn to 'initlay'.
(This will set NET.initParam to the empty matrix [] since
INITLAY has no initialization parameters.)
2) Set each NET.layers{i}.initFcn to a layer initialization function.
(Examples of such functions are INITWB and INITNW).
To initialize the network call INIT.
See NEWP and NEWLIN for initialization examples.
Algorithm
The weights and biases of each layer i are initialized according
to NET.layers{i}.initFcn.
See also INITWB, INITNW, INIT.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:26
1941 bytes
INITNW Nguyen-Widrow layer initialization function.
Syntax
net = initnw(net,i)
Description
INITNW is a layer initialization function which initializes
a layer's weights and biases according to the Nguyen-Widrow
initialization algorithm. This algorithm chooses values in order
to distribute the active region of each neuron in the layer
evenly across the layer's input space.
INITNW(NET,i) takes two arguments,
NET - Neural network.
i - Index of a layer.
and returns the network with layer i's weights and biases updated.
Network Use
You can create a standard network that uses INITNW by calling
NEWFF or NEWCF.
To prepare a custom network to be initialized with INITNW:
1) Set NET.initFcn to 'initlay'.
(This will set NET.initParam to the empty matrix [] since
INITLAY has no initialization parameters.)
2) Set NET.layers{i}.initFcn to 'initnw'.
To initialize the network call INIT.
See NEWFF and NEWCF for training examples.
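A minimal sketch of the steps above for an assumed two-layer feed-forward network
(NEWFF already applies these settings by default, so this is only illustrative):
net = newff([0 1; -1 1],[4 1]);      % assumed example network
net.initFcn = 'initlay';
net.layers{1}.initFcn = 'initnw';
net.layers{2}.initFcn = 'initnw';
net = init(net);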
Algorithm
The Nguyen-Widrow method generates initial weight and bias
values for a layer so that the active regions of the layer's
neurons will be distributed roughly evenly over the input space.
Advantages over purely random weights and biases are:
(1) Few neurons are wasted (since all the neurons are in the input space).
(2) Training works faster (since each area of the input space has neurons).
The Nguyen-Widrow method can only be applied to layers...
...with a bias,
...with weights whose "weightFcn" is DOTPROD,
...with "netInputFcn" set to NETSUM.
If these conditions are not met then INITNW uses RANDS to
initialize the layer's weights and biases.
See also INITLAY, INITWB, INIT.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:28
7342 bytes
INITWB By-weight-and-bias layer initialization function.
Syntax
net = initwb(net,i)
Description
INITWB is a layer initialization function which initializes
a layer's weights and biases according to their own initialization
functions.
INITWB(NET,i) takes two arguments,
NET - Neural network.
i - Index of a layer.
and returns the network with layer i's weights and biases updated.
Network Use
You can create a standard network that uses INITWB by calling
NEWP or NEWLIN.
To prepare a custom network to be initialized with INITWB:
1) Set NET.initFcn to 'initlay'.
(This will set NET.initParam to the empty matrix [] since
INITLAY has no initialization parameters.)
2) Set NET.layers{i}.initFcn to 'initwb'.
3) Set each NET.inputWeights{i,j}.initFcn to a weight initialization function.
Set each NET.layerWeights{i,j}.initFcn to a weight initialization function.
Set each NET.biases{i}.initFcn to a bias initialization function.
(Examples of such functions are RANDS and MIDPOINT.)
To initialize the network call INIT.
See NEWP and NEWLIN for training examples.
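A minimal sketch of the steps above for an assumed perceptron layer
(NEWP already uses these settings by default, so this is only illustrative):
net = newp([0 1; -2 2],3);           % assumed example network
net.initFcn = 'initlay';
net.layers{1}.initFcn = 'initwb';
net.inputWeights{1,1}.initFcn = 'rands';
net.biases{1}.initFcn = 'rands';
net = init(net);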
Algorithm
Each weight (bias) in layer i is set to new values calculated
according to its weight (bias) initialization function.
See also INITNW, INITLAY, INIT.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:28
3050 bytes
INITZERO Zero weight/bias initialization function.
Syntax
W = initzero(S,PR)
b = initzero(S,[1 1])
Description
INITZERO(S,PR) takes these arguments,
S - Number of rows (neurons).
PR - Rx2 matrix of input value ranges = [Pmin Pmax].
and returns an SxR weight matrix of zeros.
INITZERO(S,[1 1])
returns an Sx1 bias vector of zeros.
Examples
Here initial weights and biases are calculated for
a layer with two inputs ranging over [0 1] and [-2 2],
and 5 neurons.
W = initzero(5,[0 1; -2 2])
b = initzero(5,[1 1])
Network Use
You can create a standard network that uses INITZERO to initialize
its weights by calling NEWP or NEWLIN.
To prepare the weights and the bias of layer i of a custom network
to be initialized with INITZERO:
1) Set NET.initFcn to 'initlay'.
(NET.initParam will automatically become INITLAY's default parameters.)
2) Set NET.layers{i}.initFcn to 'initwb'.
3) Set each NET.inputWeights{i,j}.initFcn to 'initzero'.
Set each NET.layerWeights{i,j}.initFcn to 'initzero';
Set each NET.biases{i}.initFcn to 'initzero';
To initialize the network call INIT.
See NEWP or NEWLIN for initialization examples.
See also INITWB, INITLAY, INIT.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:30
1596 bytes
LEARNCON Conscience bias learning function.
Syntax
[dB,LS] = learncon(B,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learncon(code)
Description
LEARNCON is the conscience bias learning function
used to increase the net input to neurons which
have the lowest average output until each neuron
responds roughly an equal percentage of the time.
LEARNCON(B,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
B - Sx1 bias vector.
P - 1xQ ones vector.
Z - SxQ weighted input vectors.
N - SxQ net input vectors.
A - SxQ output vectors.
T - SxQ layer target vectors.
E - SxQ layer error vectors.
gW - SxR gradient with respect to performance.
gA - SxQ output gradient with respect to performance.
D - SxS neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns
dB - Sx1 weight (or bias) change matrix.
LS - New learning state.
Learning occurs according to LEARNCON's learning parameter,
shown here with its default value.
LP.lr - 0.001 - Learning rate
LEARNCON(CODE) returns useful information for each CODE string:
'pnames' - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
NNT 2.0 compatibility: The LP.lr described above equals
1 minus the bias time constant used by TRAINC in NNT 2.0.
Examples
Here we define a random output A and bias vector B for a
layer with 3 neurons. We also define the learning rate LR.
a = rand(3,1);
b = rand(3,1);
lp.lr = 0.5;
Since LEARNCON only needs these values to calculate a bias
change (see Algorithm below), we will use them to do so.
dW = learncon(b,[],[],[],a,[],[],[],[],[],lp,[])
Network Use
To prepare the bias of layer i of a custom network
to learn with LEARNCON:
1) Set NET.trainFcn to 'trainr'.
(NET.trainParam will automatically become TRAINR's default parameters.)
2) Set NET.adaptFcn to 'trains'.
(NET.adaptParam will automatically become TRAINS's default parameters.)
3) Set NET.biases{i}.learnFcn to 'learncon'.
(The bias learning parameter property will automatically
be set to LEARNCON's default parameters.)
To train the network (or enable it to adapt):
1) Set NET.trainParam (or NET.adaptParam) properties as desired.
2) Call TRAIN (or ADAPT).
Algorithm
LEARNCON calculates the bias change db for a given neuron
by first updating each neuron's "conscience", i.e. the
running average of its output:
c = (1-lr)*c + lr*a
The conscience is then used to compute a bias for the
neuron that is greatest for smaller conscience values.
b = exp(1-log(c)) - b
(Note that LEARNCON is able to recover C each time it
is called from the bias values.)
See also LEARNK, LEARNOS, ADAPT, TRAIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:34
4009 bytes
LEARNGD Gradient descent weight/bias learning function.
Syntax
[dW,LS] = learngd(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
[db,LS] = learngd(b,ones(1,Q),Z,N,A,T,E,gW,gA,D,LP,LS)
info = learngd(code)
Description
LEARNGD is the gradient descent weight/bias learning function.
LEARNGD(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - SxR weight matrix (or Sx1 bias vector).
P - RxQ input vectors (or ones(1,Q)).
Z - SxQ weighted input vectors.
N - SxQ net input vectors.
A - SxQ output vectors.
T - SxQ layer target vectors.
E - SxQ layer error vectors.
gW - SxR gradient with respect to performance.
gA - SxQ output gradient with respect to performance.
D - SxS neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns
dW - SxR weight (or bias) change matrix.
LS - New learning state.
Learning occurs according to LEARNGD's learning parameter
shown here with its default value.
LP.lr - 0.01 - Learning rate
LEARNGD(CODE) returns useful information for each CODE string:
'pnames' - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
Examples
Here we define a random gradient gW for a weight going
to a layer with 3 neurons, from an input with 2 elements.
We also define a learning rate of 0.5.
gW = rand(3,2);
lp.lr = 0.5;
Since LEARNGD only needs these values to calculate a weight
change (see Algorithm below), we will use them to do so.
dW = learngd([],[],[],[],[],[],[],gW,[],[],lp,[])
Network Use
You can create a standard network that uses LEARNGD with NEWFF,
NEWCF, or NEWELM.
To prepare the weights and the bias of layer i of a custom network
to adapt with LEARNGD:
1) Set NET.adaptFcn to 'trains'.
NET.adaptParam will automatically become TRAINS's default parameters.
2) Set each NET.inputWeights{i,j}.learnFcn to 'learngd'.
Set each NET.layerWeights{i,j}.learnFcn to 'learngd'.
Set NET.biases{i}.learnFcn to 'learngd'.
Each weight and bias learning parameter property will automatically
be set to LEARNGD's default parameters.
To allow the network to adapt:
1) Set NET.adaptParam properties to desired values.
2) Call ADAPT with the network.
See NEWFF or NEWCF for examples.
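A compact end-to-end sketch of that recipe; the network size, input ranges, and random data are illustrative assumptions.
P = con2seq(rand(2,10));                 % 10 two-element input vectors, in [0,1]
T = con2seq(rand(1,10));                 % matching one-element targets
net = newff([0 1; 0 1],[3 1],{'tansig','purelin'},'traingd','learngd');
net.adaptFcn = 'trains';                 % step 1
net.adaptParam.passes = 1;
[net,Y,E] = adapt(net,P,T);              % step 2: one adaption pass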
Algorithm
LEARNGD calculates the weight change dW for a given neuron from
the neuron's input P and error E, and the weight (or bias) learning
rate LR, according to the gradient descent:
dw = lr*gW
See also LEARNGDM, NEWFF, NEWCF, ADAPT, TRAIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:36
3502 bytes
LEARNGDM Gradient descent w/momentum weight/bias learning function.
Syntax
[dW,LS] = learngdm(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
[db,LS] = learngdm(b,ones(1,Q),Z,N,A,T,E,gW,gA,D,LP,LS)
info = learngdm(code)
Description
LEARNGDM is the gradient descent with momentum weight/bias
learning function.
LEARNGDM(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - SxR weight matrix (or Sx1 bias vector).
P - RxQ input vectors (or ones(1,Q)).
Z - SxQ weighted input vectors.
N - SxQ net input vectors.
A - SxQ output vectors.
T - SxQ layer target vectors.
E - SxQ layer error vectors.
gW - SxR gradient with respect to performance.
gA - SxQ output gradient with respect to performance.
D - SxS neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns,
dW - SxR weight (or bias) change matrix.
LS - New learning state.
Learning occurs according to LEARNGDM's learning parameters,
shown here with their default values.
LP.lr - 0.01 - Learning rate
LP.mc - 0.9 - Momentum constant
LEARNGDM(CODE) returns useful information for each CODE string:
'pnames' - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
Examples
Here we define a random gradient gW for a weight going
to a layer with 3 neurons, from an input with 2 elements.
We also define a learning rate of 0.5 and a momentum constant
of 0.8.
gW = rand(3,2);
lp.lr = 0.5;
lp.mc = 0.8;
Since LEARNGDM only needs these values to calculate a weight
change (see Algorithm below), we will use them to do so.
We will use the default initial learning state.
ls = [];
[dW,ls] = learngdm([],[],[],[],[],[],[],gW,[],[],lp,ls)
LEARNGDM returns the weight change and a new learning state.
Network Use
You can create a standard network that uses LEARNGDM with NEWFF,
NEWCF, or NEWELM.
To prepare the weights and the bias of layer i of a custom network
to adapt with LEARNGDM:
1) Set NET.adaptFcn to 'trains'.
NET.adaptParam will automatically become TRAINS's default parameters.
2) Set each NET.inputWeights{i,j}.learnFcn to 'learngdm'.
Set each NET.layerWeights{i,j}.learnFcn to 'learngdm'.
Set NET.biases{i}.learnFcn to 'learngdm'.
Each weight and bias learning parameter property will automatically
be set to LEARNGDM's default parameters.
To allow the network to adapt:
1) Set NET.adaptParam properties to desired values.
2) Call ADAPT with the network.
See NEWFF or NEWCF for examples.
Algorithm
LEARNGDM calculates the weight change dW for a given neuron
from the neuron's input P and error E, the weight (or bias)
learning rate LR, and momentum constant MC, according to
gradient descent with momentum:
dW = mc*dWprev + (1-mc)*lr*gW
The previous weight change dWprev is stored and read
from the learning state LS.
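Following on from the example above, two successive calls that feed the returned state back in make the momentum term visible; this is a sketch of the stated formula, starting afresh from an empty learning state.
[dW1,ls] = learngdm([],[],[],[],[],[],[],gW,[],[],lp,[]);
[dW2,ls] = learngdm([],[],[],[],[],[],[],gW,[],[],lp,ls);
% Per the formula above, dW2 should equal lp.mc*dW1 + (1-lp.mc)*lp.lr*gW,
% since dW1 is the previous change stored in the learning state ls.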
See also LEARNGD, NEWFF, NEWCF, ADAPT, TRAIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:36
4145 bytes
LEARNH Hebb weight learning rule.
Syntax
[dW,LS] = learnh(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnh(code)
Description
LEARNH is the Hebb weight learning function.
LEARNH(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - SxR weight matrix (or Sx1 bias vector).
P - RxQ input vectors (or ones(1,Q)).
Z - SxQ weighted input vectors.
N - SxQ net input vectors.
A - SxQ output vectors.
T - SxQ layer target vectors.
E - SxQ layer error vectors.
gW - SxR gradient with respect to performance.
gA - SxQ output gradient with respect to performance.
D - SxS neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns,
dW - SxR weight (or bias) change matrix.
LS - New learning state.
Learning occurs according to LEARNH's learning parameter,
shown here with its default value.
LP.lr - 0.01 - Learning rate
LEARNH(CODE) returns useful information for each CODE string:
'pnames' - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
Examples
Here we define a random input P and output A for a layer
with a 2-element input and 3 neurons. We also define the
learning rate LR.
p = rand(2,1);
a = rand(3,1);
lp.lr = 0.5;
Since LEARNH only needs these values to calculate a weight
change (see Algorithm below), we will use them to do so.
dW = learnh([],p,[],[],a,[],[],[],[],[],lp,[])
Network Use
To prepare the weights and the bias of layer i of a custom network
to learn with LEARNH:
1) Set NET.trainFcn to 'trainr'.
(NET.trainParam will automatically become TRAINR's default parameters.)
2) Set NET.adaptFcn to 'trains'.
(NET.adaptParam will automatically become TRAINS's default parameters.)
3) Set each NET.inputWeights{i,j}.learnFcn to 'learnh'.
Set each NET.layerWeights{i,j}.learnFcn to 'learnh'.
(Each weight learning parameter property will automatically
be set to LEARNH's default parameters.)
To train the network (or enable it to adapt):
1) Set NET.trainParam (NET.adaptParam) properties to desired values.
2) Call TRAIN (ADAPT).
Algorithm
LEARNH calculates the weight change dW for a given neuron from the
neuron's input P, output A, and learning rate LR according to the
Hebb learning rule:
dw = lr*a*p'
See also LEARNHD, ADAPT, TRAIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:38
3478 bytes
LEARNHD Hebb with decay weight learning rule.
Syntax
[dW,LS] = learnhd(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnhd(code)
Description
LEARNHD is the Hebb with decay weight learning function.
LEARNHD(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - SxR weight matrix (or Sx1 bias vector).
P - RxQ input vectors (or ones(1,Q)).
Z - SxQ weighted input vectors.
N - SxQ net input vectors.
A - SxQ output vectors.
T - SxQ layer target vectors.
E - SxQ layer error vectors.
gW - SxR gradient with respect to performance.
gA - SxQ output gradient with respect to performance.
D - SxS neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns,
dW - SxR weight (or bias) change matrix.
LS - New learning state.
Learning occurs according to LEARNHD's learning parameters
shown here with default values.
LP.dr - 0.01 - Decay rate.
LP.lr - 0.1 - Learning rate
LEARNHD(CODE) returns useful information for each CODE string:
'pnames' - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
Examples
Here we define a random input P, output A, and weights W
for a layer with a 2-element input and 3 neurons. We also
define the decay and learning rates.
p = rand(2,1);
a = rand(3,1);
w = rand(3,2);
lp.dr = 0.05;
lp.lr = 0.5;
Since LEARNHD only needs these values to calculate a weight
change (see Algorithm below), we will use them to do so.
dW = learnhd(w,p,[],[],a,[],[],[],[],[],lp,[])
Network Use
To prepare the weights and the bias of layer i of a custom network
to learn with LEARNHD:
1) Set NET.trainFcn to 'trainr'.
(NET.trainParam will automatically become TRAINR's default parameters.)
2) Set NET.adaptFcn to 'trains'.
(NET.adaptParam will automatically become TRAINS's default parameters.)
3) Set each NET.inputWeights{i,j}.learnFcn to 'learnhd'.
Set each NET.layerWeights{i,j}.learnFcn to 'learnhd'.
(Each weight learning parameter property will automatically
be set to LEARNHD's default parameters.)
To train the network (or enable it to adapt):
1) Set NET.trainParam (NET.adaptParam) properties to desired values.
2) Call TRAIN (ADAPT).
Algorithm
LEARNHD calculates the weight change dW for a given neuron from the
neuron's input P, output A, decay rate DR, and learning rate LR
according to the Hebb with decay learning rule:
dw = lr*a*p' - dr*w
See also LEARNH, ADAPT, TRAIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:38
3672 bytes
LEARNIS Instar weight learning function.
Syntax
[dW,LS] = learnis(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnis(code)
Description
LEARNIS is the instar weight learning function.
LEARNIS(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - SxR weight matrix (or Sx1 bias vector).
P - RxQ input vectors (or ones(1,Q)).
Z - SxQ weighted input vectors.
N - SxQ net input vectors.
A - SxQ output vectors.
T - SxQ layer target vectors.
E - SxQ layer error vectors.
gW - SxR gradient with respect to performance.
gA - SxQ output gradient with respect to performance.
D - SxS neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns,
dW - SxR weight (or bias) change matrix.
LS - New learning state.
Learning occurs according to LEARNIS's learning parameter,
shown here with its default value.
LP.lr - 0.01 - Learning rate
LEARNIS(CODE) returns useful information for each CODE string:
'pnames' - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
Examples
Here we define a random input P, output A, and weight matrix W
for a layer with a 2-element input and 3 neurons. We also define
the learning rate LR.
p = rand(2,1);
a = rand(3,1);
w = rand(3,2);
lp.lr = 0.5;
Since LEARNIS only needs these values to calculate a weight
change (see Algorithm below), we will use them to do so.
dW = learnis(w,p,[],[],a,[],[],[],[],[],lp,[])
Network Use
To prepare the weights and the bias of layer i of a custom network
so that it can learn with LEARNIS:
1) Set NET.trainFcn to 'trainr'.
(NET.trainParam will automatically become TRAINR's default parameters.)
2) Set NET.adaptFcn to 'trains'.
(NET.adaptParam will automatically become TRAINS's default parameters.)
3) Set each NET.inputWeights{i,j}.learnFcn to 'learnis'.
Set each NET.layerWeights{i,j}.learnFcn to 'learnis'.
(Each weight learning parameter property will automatically
be set to LEARNIS's default parameters.)
To train the network (or enable it to adapt):
1) Set NET.trainParam (NET.adaptParam) properties to desired values.
2) Call TRAIN (ADAPT).
Algorithm
LEARNIS calculates the weight change dW for a given neuron from the
neuron's input P, output A, and learning rate LR according to the
instar learning rule:
dw = lr*a*(p'-w)
See also LEARNK, LEARNOS, ADAPT, TRAIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:40
3726 bytes
LEARNK Kohonen weight learning function.
Syntax
[dW,LS] = learnk(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnk(code)
Description
LEARNK is the Kohonen weight learning function.
LEARNK(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - SxR weight matrix (or Sx1 bias vector).
P - RxQ input vectors (or ones(1,Q)).
Z - SxQ weighted input vectors.
N - SxQ net input vectors.
A - SxQ output vectors.
T - SxQ layer target vectors.
E - SxQ layer error vectors.
gW - SxR gradient with respect to performance.
gA - SxQ output gradient with respect to performance.
D - SxS neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns,
dW - SxR weight (or bias) change matrix.
LS - New learning state.
Learning occurs according to LEARNK's learning parameter,
shown here with its default value.
LP.lr - 0.01 - Learning rate
LEARNK(CODE) returns useful information for each CODE string:
'pnames' - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
Examples
Here we define a random input P, output A, and weight matrix W
for a layer with a 2-element input and 3 neurons. We also define
the learning rate LR.
p = rand(2,1);
a = rand(3,1);
w = rand(3,2);
lp.lr = 0.5;
Since LEARNK only needs these values to calculate a weight
change (see Algorithm below), we will use them to do so.
dW = learnk(w,p,[],[],a,[],[],[],[],[],lp,[])
Network Use
To prepare the weights of layer i of a custom network
to learn with LEARNK:
1) Set NET.trainFcn to 'trainr'.
(NET.trainParam will automatically become TRAINR's default parameters.)
2) Set NET.adaptFcn to 'trains'.
(NET.adaptParam will automatically become TRAINS's default parameters.)
3) Set each NET.inputWeights{i,j}.learnFcn to 'learnk'.
Set each NET.layerWeights{i,j}.learnFcn to 'learnk'.
(Each weight learning parameter property will automatically
be set to LEARNK's default parameters.)
To train the network (or enable it to adapt):
1) Set NET.trainParam (or NET.adaptParam) properties as desired.
2) Call TRAIN (or ADAPT).
Algorithm
LEARNK calculates the weight change dW for a given neuron from
the neuron's input P, output A, and learning rate LR according
to the Kohonen learning rule:
dw = lr*(p'-w), if a ~= 0
= 0, otherwise
See also LEARNIS, LEARNOS, ADAPT, TRAIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:40
3729 bytes
LEARNLV1 LVQ1 weight learning function.
Syntax
[dW,LS] = learnlv1(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnlv1(code)
Description
LEARNLV1 is the LVQ1 weight learning function.
LEARNLV1(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - SxR weight matrix (or Sx1 bias vector).
P - RxQ input vectors (or ones(1,Q)).
Z - SxQ weighted input vectors.
N - SxQ net input vectors.
A - SxQ output vectors.
T - SxQ layer target vectors.
E - SxQ layer error vectors.
gW - SxR weight gradient with respect to performance.
gA - SxQ output gradient with respect to performance.
D - SxS neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns,
dW - SxR weight (or bias) change matrix.
LS - New learning state.
Learning occurs according to LEARNLV1's learning parameter,
shown here with its default value.
LP.lr - 0.01 - Learning rate
LEARNLV1(CODE) returns useful information for each CODE string:
'pnames' - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
Examples
Here we define a random input P, output A, weight matrix W, and
output gradient gA for a layer with a 2-element input and 3 neurons.
We also define the learning rate LR.
p = rand(2,1);
w = rand(3,2);
a = compet(negdist(w,p));
gA = [-1;1;1];
lp.lr = 0.5;
Since LEARNLV1 only needs these values to calculate a weight
change (see Algorithm below), we will use them to do so.
dW = learnlv1(w,p,[],[],a,[],[],[],gA,[],lp,[])
Network Use
You can create a standard network that uses LEARNLV1 with NEWLVQ.
To prepare the weights of layer i of a custom network
to learn with LEARNLV1:
1) Set NET.trainFcn to 'trainr'.
(NET.trainParam will automatically become TRAINR's default parameters.)
2) Set NET.adaptFcn to 'trains'.
(NET.adaptParam will automatically become TRAINS's default parameters.)
3) Set each NET.inputWeights{i,j}.learnFcn to 'learnlv1'.
Set each NET.layerWeights{i,j}.learnFcn to 'learnlv1'.
(Each weight learning parameter property will automatically
be set to LEARNLV1's default parameters.)
To train the network (or enable it to adapt):
1) Set NET.trainParam (or NET.adaptParam) properties as desired.
2) Call TRAIN (or ADAPT).
Algorithm
LEARNLV1 calculates the weight change dW for a given neuron from
the neuron's input P, output A, output gradient gA and learning rate LR,
according to the LVQ1 rule, given i the index of the neuron whose
output a(i) is 1:
dw(i,:) = +lr*(p'-w(i,:)) if gA(i) = 0
= -lr*(p'-w(i,:)) if gA(i) = -1
See also LEARNLV2, ADAPT, TRAIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:42
3882 bytes
LEARNLV2 LVQ 2.1 weight learning function.
Syntax
[dW,LS] = learnlv2(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnlv2(code)
Description
LEARNLV2 is the LVQ 2.1 weight learning function.
LEARNLV2(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - SxR weight matrix (or Sx1 bias vector).
P - RxQ input vectors (or ones(1,Q)).
Z - SxQ weighted input vectors.
N - SxQ net input vectors.
A - SxQ output vectors.
T - SxQ layer target vectors.
E - SxQ layer error vectors.
gW - SxR weight gradient with respect to performance.
gA - SxQ output gradient with respect to performance.
D - SxS neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns,
dW - SxR weight (or bias) change matrix.
LS - New learning state.
Learning occurs according to LEARNLV2's learning parameters,
shown here with their default values.
LP.lr - 0.01 - Learning rate
LP.window - 0.25 - Window size (0 to 1, typically 0.2 to 0.3).
LEARNLV2(CODE) returns useful information for each CODE string:
'pnames' - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
Examples
Here we define a sample input P, output A, weight matrix W, and
output gradient gA for a layer with a 2-element input and 3 neurons.
We also define the learning rate LR.
p = [0;1];
w = [-1 1; 1 0; 1 1];
n = negdist(w,p);
a = compet(n);
gA = [-1;1;1];
lp.lr = 0.5;
lp.window = 0.25;
Since LEARNLV2 only needs these values to calculate a weight
change (see Algorithm below), we will use them to do so.
dW = learnlv2(w,p,[],n,a,[],[],[],gA,[],lp,[])
Network Use
LEARNLV2 should only be used to train networks which have already
been trained with LEARNLV1.
You can create a standard network that uses LEARNLV2 with NEWLVQ.
To prepare the weights of layer i of a custom network, or a
network which has been trained with LEARNLV1, to learn with LEARNLV2,
do the following:
1) Set NET.trainFcn to 'trainr'.
(NET.trainParam will automatically become TRAINR's default parameters.)
2) Set NET.adaptFcn to 'trains'.
(NET.adaptParam will automatically become TRAINS's default parameters.)
3) Set each NET.inputWeights{i,j}.learnFcn to 'learnlv2'.
Set each NET.layerWeights{i,j}.learnFcn to 'learnlv2'.
(Each weight learning parameter property will automatically
be set to LEARNLV2's default parameters.)
To train the network (or enable it to adapt):
1) Set NET.trainParam (or NET.adaptParam) properties as desired.
2) Call TRAIN (or ADAPT).
Algorithm
LEARNLV2 implements Learning Vector Quantization 2.1 which works as
follows. For each presentation examine the winning neuron k1 and the
runner up neuron k2. If one of them is in the correct class and
the other is not, then denote the incorrect one as neuron i
and the correct one as neuron j. Also assign the distance
from neuron k1 to the input as d1, and the distance from neuron k2
to the input as d2.
If the ratio of distances falls into a window as follows,
min(d1/d2, d2/d1) > (1-window)/(1+window)
then move the incorrect neuron i away from the input vector, and
move the correct neuron j toward the input according to:
dw(i,:) = - lp.lr*(p'-w(i,:))
dw(j,:) = + lp.lr*(p'-w(j,:))
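Spelled out for a single presentation, the procedure above might look like the following sketch; cl and tc, the neuron class labels and the target class of p, are illustrative names introduced here and not used by this entry.
p  = [0;1];  w = [-1 1; 1 0; 1 1];
cl = [1 2 2];                                 % class of each neuron (assumed)
tc = 2;                                       % correct class of this input (assumed)
lr = 0.5;  window = 0.25;
dist = sqrt(sum((w - repmat(p',3,1)).^2,2));  % distance from p to each prototype
[ds,order] = sort(dist);
k1 = order(1); k2 = order(2);                 % winner and runner up
d1 = ds(1);    d2 = ds(2);
if xor(cl(k1)==tc, cl(k2)==tc)                % exactly one of them is correct
  if cl(k1)==tc, j = k1; i = k2; else j = k2; i = k1; end
  if min(d1/d2, d2/d1) > (1-window)/(1+window)
    w(i,:) = w(i,:) - lr*(p'-w(i,:));         % push the incorrect neuron away
    w(j,:) = w(j,:) + lr*(p'-w(j,:));         % pull the correct neuron closer
  end
end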
See also LEARNLV1, ADAPT, TRAIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:42
5419 bytes
LEARNOS Outstar weight learning function.
Syntax
[dW,LS] = learnos(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnos(code)
Description
LEARNOS is the outstar weight learning function.
LEARNOS(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - SxR weight matrix (or Sx1 bias vector).
P - RxQ input vectors (or ones(1,Q)).
Z - SxQ weighted input vectors.
N - SxQ net input vectors.
A - SxQ output vectors.
T - SxQ layer target vectors.
E - SxQ layer error vectors.
gW - SxR gradient with respect to performance.
gA - SxQ output gradient with respect to performance.
D - SxS neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns,
dW - SxR weight (or bias) change matrix.
LS - New learning state.
Learning occurs according to LEARNOS's learning parameter,
shown here with its default value.
LP.lr - 0.01 - Learning rate
LEARNOS(CODE) returns useful information for each CODE string:
'pnames' - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
Examples
Here we define a random input P, output A, and weight matrix W
for a layer with a 2-element input and 3 neurons. We also define
the learning rate LR.
p = rand(2,1);
a = rand(3,1);
w = rand(3,2);
lp.lr = 0.5;
Since LEARNOS only needs these values to calculate a weight
change (see Algorithm below), we will use them to do so.
dW = learnos(w,p,[],[],a,[],[],[],[],[],lp,[])
Network Use
To prepare the weights and the bias of layer i of a custom network
to learn with LEARNOS:
1) Set NET.trainFcn to 'trainr'.
(NET.trainParam will automatically become TRAINR's default parameters.)
2) Set NET.adaptFcn to 'trains'.
(NET.adaptParam will automatically become TRAINS's default parameters.)
3) Set each NET.inputWeights{i,j}.learnFcn to 'learnos'.
Set each NET.layerWeights{i,j}.learnFcn to 'learnos'.
(Each weight learning parameter property will automatically
be set to LEARNOS's default parameters.)
To train the network (or enable it to adapt):
1) Set NET.trainParam (NET.adaptParam) properties to desired values.
2) Call TRAIN (ADAPT).
Algorithm
LEARNOS calculates the weight change dW for a given neuron
from the neuron's input P, output A, and learning rate LR
according to the outstar learning rule:
dw = lr*(a-w)*p'
See also LEARNIS, LEARNK, ADAPT, TRAIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:44
3704 bytes
LEARNP Perceptron weight/bias learning function.
Syntax
[dW,LS] = learnp(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
[db,LS] = learnp(b,ones(1,Q),Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnp(code)
Description
LEARNP is the perceptron weight/bias learning function.
LEARNP(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - SxR weight matrix (or b, an Sx1 bias vector).
P - RxQ input vectors (or ones(1,Q)).
Z - SxQ weighted input vectors.
N - SxQ net input vectors.
A - SxQ output vectors.
T - SxQ layer target vectors.
E - SxQ layer error vectors.
gW - SxR gradient with respect to performance.
gA - SxQ output gradient with respect to performance.
D - SxS neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns,
dW - SxR weight (or bias) change matrix.
LS - New learning state.
LEARNP(CODE) returns useful information for each CODE string:
'pnames' - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
Examples
Here we define a random input P and error E to a layer
with a 2-element input and 3 neurons.
p = rand(2,1);
e = rand(3,1);
Since LEARNP only needs these values to calculate a weight
change (see Algorithm below), we will use them to do so.
dW = learnp([],p,[],[],[],[],e,[],[],[],[],[])
Network Use
You can create a standard network that uses LEARNP with NEWP.
To prepare the weights and the bias of layer i of a custom network
to learn with LEARNP:
1) Set NET.trainFcn to 'trainb'.
(NET.trainParam will automatically become TRAINB's default parameters.)
2) Set NET.adaptFcn to 'trains'.
(NET.adaptParam will automatically become TRAINS's default parameters.)
3) Set each NET.inputWeights{i,j}.learnFcn to 'learnp'.
Set each NET.layerWeights{i,j}.learnFcn to 'learnp'.
Set NET.biases{i}.learnFcn to 'learnp'.
(Each weight and bias learning parameter property will automatically
become the empty matrix since LEARNP has no learning parameters.)
To train the network (or enable it to adapt):
1) Set NET.trainParam (NET.adaptParam) properties to desired values.
2) Call TRAIN (ADAPT).
See NEWP for adaption and training examples.
Algorithm
LEARNP calculates the weight change dW for a given neuron from the
neuron's input P and error E according to the perceptron learning rule:
dw = 0, if e = 0
= p', if e = 1
= -p', if e = -1
This can be summarized as:
dw = e*p'
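For context, a short end-to-end sketch in which LEARNP is exercised indirectly; NEWP wires LEARNP in as its default learning function, and the AND-gate data below is an illustrative assumption, not part of this entry.
P = [0 0 1 1; 0 1 0 1];
T = [0 0 0 1];
net = newp([0 1; 0 1],1);    % perceptron whose learnFcn is 'learnp'
net = train(net,P,T);        % training applies LEARNP to update W and b
Y = sim(net,P)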
See also LEARNPN, NEWP, ADAPT, TRAIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:44
3956 bytes
LEARNPN Normalized perceptron weight/bias learning function.
Syntax
[dW,LS] = learnpn(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnpn(code)
Description
LEARNPN is a weight/bias learning function. It can result
in faster learning than LEARNP when input vectors have
widely varying magnitudes.
LEARNPN(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - SxR weight matrix (or Sx1 bias vector).
P - RxQ input vectors (or ones(1,Q)).
Z - SxQ weighted input vectors.
N - SxQ net input vectors.
A - SxQ output vectors.
T - SxQ layer target vectors.
E - SxQ layer error vectors.
gW - SxR gradient with respect to performance.
gA - SxQ output gradient with respect to performance.
D - SxS neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns,
dW - SxR weight (or bias) change matrix.
LS - New learning state.
LEARNPN(CODE) returns useful information for each CODE string:
'pnames' - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
Examples
Here we define a random input P and error E to a layer
with a 2-element input and 3 neurons.
p = rand(2,1);
e = rand(3,1);
Since LEARNPN only needs these values to calculate a weight
change (see Algorithm below), we will use them to do so.
dW = learnpn([],p,[],[],[],[],e,[],[],[],[],[])
Network Use
You can create a standard network that uses LEARNPN with NEWP.
To prepare the weights and the bias of layer i of a custom network
to learn with LEARNPN:
1) Set NET.trainFcn to 'trainb'.
NET.trainParam will automatically become TRAINB's default parameters.
2) Set NET.adaptFcn to 'trains'.
NET.adaptParam will automatically become TRAINS's default parameters.
3) Set each NET.inputWeights{i,j}.learnFcn to 'learnpn'.
Set each NET.layerWeights{i,j}.learnFcn to 'learnpn'.
Set NET.biases{i}.learnFcn to 'learnpn'.
Each weight and bias learning parameter property will automatically
become the empty matrix since LEARNPN has no learning parameters.
To train the network (or enable it to adapt):
1) Set NET.trainParam (NET.adaptParam) properties to desired values.
2) Call TRAIN (ADAPT).
See NEWP for adaption and training examples.
Algorithm
LEARNPN calculates the weight change dW for a given neuron from the
neuron's input P and error E according to the normalized perceptron
learning rule:
pn = p / sqrt(1 + p(1)^2 + p(2)^2 + ... + p(R)^2)
dw = 0, if e = 0
= pn', if e = 1
= -pn', if e = -1
The expression for dW can be summarized as:
dw = e*pn'
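The same computation written out directly, mirroring the two formulas above; a sketch with an assumed error vector of -1, 0 and +1 entries, as the rule intends.
p  = rand(2,1);
e  = [1; -1; 0];                 % example error vector (assumed)
pn = p / sqrt(1 + sum(p.^2));    % normalized input
dW = e*pn'                       % matches the form LEARNPN returns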
See also LEARNP, NEWP, ADAPT, TRAIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:46
3943 bytes
LEARNSOM Self-organizing map weight learning function.
Syntax
[dW,LS] = learnsom(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnsom(code)
Description
LEARNSOM is the self-organizing map weight learning function.
LEARNSOM(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - SxR weight matrix (or Sx1 bias vector).
P - RxQ input vectors (or ones(1,Q)).
Z - SxQ weighted input vectors.
N - SxQ net input vectors.
A - SxQ output vectors.
T - SxQ layer target vectors.
E - SxQ layer error vectors.
gW - SxR gradient with respect to performance.
gA - SxQ output gradient with respect to performance.
D - SxS neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns,
dW - SxR weight (or bias) change matrix.
LS - New learning state.
Learning occurs according to LEARNSOM's learning parameters,
shown here with their default values.
LP.order_lr - 0.9 - Ordering phase learning rate.
LP.order_steps - 1000 - Ordering phase steps.
LP.tune_lr - 0.02 - Tuning phase learning rate.
LP.tune_nd - 1 - Tuning phase neighborhood distance.
LEARNSOM(CODE) returns useful information for each CODE string:
'pnames' - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
Examples
Here we define a random input P, output A, and weight matrix W,
for a layer with a 2-element input and 6 neurons. We also calculate
the positions and distances for the neurons which are arranged in a
2x3 hexagonal pattern. Then we define the four learning parameters.
p = rand(2,1);
a = rand(6,1);
w = rand(6,2);
pos = hextop(2,3);
d = linkdist(pos);
lp.order_lr = 0.9;
lp.order_steps = 1000;
lp.tune_lr = 0.02;
lp.tune_nd = 1;
Since LEARNSOM only needs these values to calculate a weight
change (see Algorithm below), we will use them to do so.
ls = [];
[dW,ls] = learnsom(w,p,[],[],a,[],[],[],[],d,lp,ls)
Network Use
You can create a standard network that uses LEARNSOM with NEWSOM.
To prepare the weights of layer i of a custom network
to learn with LEARNSOM:
1) Set NET.trainFcn to 'trainr'.
(NET.trainParam will automatically become TRAINR's default parameters.)
2) Set NET.adaptFcn to 'trains'.
(NET.adaptParam will automatically become TRAINS's default parameters.)
3) Set each NET.inputWeights{i,j}.learnFcn to 'learnsom'.
Set each NET.layerWeights{i,j}.learnFcn to 'learnsom'.
(Each weight learning parameter property will automatically
be set to LEARNSOM's default parameters.)
To train the network (or enable it to adapt):
1) Set NET.trainParam (or NET.adaptParam) properties as desired.
2) Call TRAIN (or ADAPT).
Algorithm
LEARNSOM calculates the weight change dW for a given neuron from
the neuron's input P, activation A2, and learning rate LR:
dw = lr*a2*(p'-w)
where the activation A2 is found from the layer output A and
neuron distances D and the current neighborhood size ND:
a2(i,q) = 1, if a(i,q) = 1
= 0.5, if a(j,q) = 1 and D(i,j) <= nd
= 0, otherwise
The learning rate LR and neighborhood size ND are altered
through two phases: an ordering phase and a tuning phase.
The ordering phase lasts as many steps as LP.order_steps.
During this phase LR is adjusted from LP.order_lr down to
LP.tune_lr, and ND is adjusted from the maximum neuron distance
down to 1. It is during this phase that neuron weights are expected
to order themselves in the input space consistent with
the associated neuron positions.
During the tuning phase LR decreases slowly from LP.tune_lr and
ND is always set to LP.tune_nd. During this phase the weights are
expected to spread out relatively evenly over the input space while
retaining their topological order found during the ordering phase.
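In practice LEARNSOM is usually exercised indirectly through NEWSOM and TRAIN, which run the two phases automatically; a rough sketch, in which the data and the 2x3 map size are illustrative assumptions.
P = rand(2,400);                 % 400 random two-element input vectors
net = newsom(minmax(P),[2 3]);   % 2x3 map; its learning function is 'learnsom'
net.trainParam.epochs = 100;
net = train(net,P);              % runs the ordering phase, then the tuning phase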
See also ADAPT, TRAIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:46
5646 bytes
LEARNWH Widrow-Hoff weight/bias learning function.
Syntax
[dW,LS] = learnwh(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
[db,LS] = learnwh(b,ones(1,Q),Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnwh(code)
Description
LEARNWH is the Widrow-Hoff weight/bias learning function,
and is also known as the delta or least mean squared (LMS) rule.
LEARNWH(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - SxR weight matrix (or b, an Sx1 bias vector).
P - RxQ input vectors (or ones(1,Q)).
Z - SxQ weighted input vectors.
N - SxQ net input vectors.
A - SxQ output vectors.
T - SxQ layer target vectors.
E - SxQ layer error vectors.
gW - SxR gradient with respect to performance.
gA - SxQ output gradient with respect to performance.
D - SxS neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns,
dW - SxR weight (or bias) change matrix.
LS - New learning state.
Learning occurs according to LEARNWH's learning parameter,
shown here with its default value.
LP.lr - 0.01 - Learning rate
LEARNWH(CODE) returns useful information for each CODE string:
'pnames' - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
Examples
Here we define a random input P and error E to a layer
with a 2-element input and 3 neurons. We also define the
learning rate LR learning parameter.
p = rand(2,1);
e = rand(3,1);
lp.lr = 0.5;
Since LEARNWH only needs these values to calculate a weight
change (see Algorithm below), we will use them to do so.
dW = learnwh([],p,[],[],[],[],e,[],[],[],lp,[])
Network Use
You can create a standard network that uses LEARNWH with NEWLIN.
To prepare the weights and the bias of layer i of a custom network
to learn with LEARNWH:
1) Set NET.trainFcn to 'trainb'.
NET.trainParam will automatically become TRAINB's default parameters.
2) Set NET.adaptFcn to 'trains'.
NET.adaptParam will automatically become TRAINS's default parameters.
3) Set each NET.inputWeights{i,j}.learnFcn to 'learnwh'.
Set each NET.layerWeights{i,j}.learnFcn to 'learnwh'.
Set NET.biases{i}.learnFcn to 'learnwh'.
Each weight and bias learning parameter property will automatically
be set to LEARNWH's default parameters.
To train the network (or enable it to adapt):
1) Set NET.trainParam (NET.adaptParam) properties to desired values.
2) Call TRAIN (ADAPT).
See NEWLIN for adaption and training examples.
Algorithm
LEARNWH calculates the weight change dW for a given neuron from the
neuron's input P and error E, and the weight (or bias) learning
rate LR, according to the Widrow-Hoff learning rule:
dw = lr*e*p'
See also NEWLIN, ADAPT, TRAIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:48
3978 bytes
LINKDIST Link distance function.
Syntax
d = linkdist(pos);
Description
LINKDIST is a layer distance function used to find
the distances between the layer's neurons given their
positions.
LINKDIST(pos) takes one argument,
POS - NxS matrix of neuron positions.
and returns the SxS matrix of distances.
Examples
Here we define a random matrix of positions for 10 neurons
arranged in three dimensional space and find their distances.
pos = rand(3,10);
D = linkdist(pos)
Network Use
You can create a standard network that uses LINKDIST
as a distance function by calling NEWSOM.
To change a network so a layer's topology uses LINKDIST set
NET.layers{i}.distanceFcn to 'linkdist'.
In either case, call SIM to simulate the network with LINKDIST.
See NEWSOM for training and adaption examples.
Algorithm
The link distance D between two position vectors Pi and Pj
from a set of S vectors is:
Dij = 0, if i==j
= 1, if sum((Pi-Pj).^2).^0.5 is <= 1
= 2, if k exists, Dik = Dkj = 1
= 3, if k1, k2 exist, Dik1 = Dk1k2 = Dk2j = 1.
= N, if k1..kN exist, Dik1 = Dk1k2 = ...= DkNj = 1
= S, if none of the above conditions apply.
See also SIM, DIST, MANDIST.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:12
1831 bytes
LOGSIG Logarithmic sigmoid transfer function.
Syntax
A = logsig(N,FP)
dA_dN = logsig('dn',N,A,FP)
INFO = logsig(CODE)
Description
LOGSIG(N,FP) takes N and optional function parameters,
N - SxQ matrix of net input (column) vectors.
FP - Struct of function parameters (ignored).
and returns A, the SxQ matrix of N's elements squashed into [0, 1].
LOGSIG('dn',N,A,FP) returns the SxQ derivative of A with respect to N.
If A or FP are not supplied or are set to [], FP reverts to
the default parameters, and A is calculated from N.
LOGSIG('name') returns the name of this function.
LOGSIG('output',FP) returns the [min max] output range.
LOGSIG('active',FP) returns the [min max] active input range.
LOGSIG('fullderiv') returns 1 or 0, whether DA_DN is SxSxQ or SxQ.
LOGSIG('fpnames') returns the names of the function parameters.
LOGSIG('fpdefaults') returns the default function parameters.
Examples
Here is code for creating a plot of the LOGSIG transfer function.
n = -5:0.1:5;
a = logsig(n);
plot(n,a)
Here we assign this transfer function to layer i of a network.
net.layers{i}.transferFcn = 'logsig';
Algorithm
logsig(n) = 1 / (1 + exp(-n))
See also SIM, DLOGSIG, TANSIG.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:10
2568 bytes
MAE Mean absolute error performance function.
Syntax
perf = mae(E,Y,X,FP)
dPerf_dy = mae('dy',E,Y,X,perf,FP);
dPerf_dx = mae('dx',E,Y,X,perf,FP);
info = mae(code)
Description
MAE is a network performance function. It measures network
performance as the mean of absolute errors.
MAE(E,Y,X,FP) takes E and optional function parameters,
E - Matrix or cell array of error vectors.
Y - Matrix or cell array of output vectors. (ignored).
X - Vector of all weight and bias values (ignored).
FP - Function parameters (ignored).
and returns the mean absolute error.
MAE('dy',E,Y,X,PERF,FP) returns derivative of PERF with respect to Y.
MAE('dx',E,Y,X,PERF,FP) returns derivative of PERF with respect to X.
MAE('name') returns the name of this function.
MAE('pnames') returns the names of the performance parameters.
MAE('pdefaults') returns the default function parameters.
Examples
Here a perceptron is created with a 1-element input ranging
from -10 to 10, and one neuron.
net = newp([-10 10],1);
Here the network is given a batch of inputs P. The error
is calculated by subtracting the output A from target T.
Then the mean absolute error is calculated.
p = [-10 -5 0 5 10];
t = [0 0 1 1 1];
y = sim(net,p)
e = t-y
perf = mae(e)
Note that MAE can be called with only one argument because
the other arguments are ignored. MAE supports those arguments
to conform to the standard performance function argument list.
Network Use
You can create a standard network that uses MAE with NEWP.
To prepare a custom network to be trained with MAE, set
NET.performFcn to 'mae'. This will automatically set
NET.performParam to the empty matrix [], as MAE has no
performance parameters.
In either case, calling TRAIN or ADAPT will result
in MAE being used to calculate performance.
See NEWP for examples.
See also MSE, MSEREG, DMAE.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:18
3366 bytes
MANDIST Manhattan distance weight function.
Syntax
Z = mandist(W,P,FP)
info = mandist(code)
dim = mandist('size',S,R,FP)
dp = mandist('dp',W,P,Z,FP)
dw = mandist('dw',W,P,Z,FP)
D = mandist(pos);
Description
MANDIST is the Manhattan distance weight function. Weight
functions apply weights to an input to get weighted inputs.
MANDIST(W,P,FP) takes these inputs,
W - SxR weight matrix.
P - RxQ matrix of Q input (column) vectors.
FP - Row cell array of function parameters (optional, ignored).
and returns the SxQ matrix of vector distances.
MANDIST(code) returns information about this function.
These codes are defined:
'deriv' - Name of derivative function.
'fullderiv' - Full derivative = 1, linear derivative = 0.
'name' - Full name.
'fpnames' - Returns names of function parameters.
'fpdefaults' - Returns default function parameters.
MANDIST('size',S,R,FP) takes the layer dimension S, input dimension R,
and function parameters, and returns the weight size [SxR].
MANDIST('dp',W,P,Z,FP) returns the derivative of Z with respect to P.
MANDIST('dw',W,P,Z,FP) returns the derivative of Z with respect to W.
MANDIST is also a layer distance function which can be used
to find distances between neurons in a layer.
MANDIST(POS) takes one argument,
POS - NxS matrix of neuron positions.
and returns the SxS matrix of distances.
Examples
Here we define a random weight matrix W and input vector P
and calculate the corresponding weighted input Z.
W = rand(4,3);
P = rand(3,1);
Z = mandist(W,P)
Here we define a random matrix of positions for 10 neurons
arranged in three dimensional space and then find their distances.
pos = rand(3,10);
D = mandist(pos)
Network Use
You can create a standard network that uses MANDIST
as a distance function by calling NEWSOM.
To change a network so an input weight uses MANDIST set
NET.inputWeights{i,j}.weightFcn to 'mandist'. For a layer weight
set NET.layerWeights{i,j}.weightFcn to 'mandist'.
To change a network so a layer's topology uses MANDIST set
NET.layers{i}.distanceFcn to 'mandist'.
In either case, call SIM to simulate the network with MANDIST.
See NEWPNN or NEWGRNN for simulation examples.
Algorithm
The Manhattan distance D between two vectors X and Y is:
D = sum(abs(x-y))
See also SIM, DIST, LINKDIST.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:22
4694 bytes
MAPMINMAX Map matrix row minimum and maximum values to [-1 1].
Syntax
[y,ps] = mapminmax(x,ymin,ymax)
[y,ps] = mapminmax(x,fp)
y = mapminmax('apply',x,ps)
x = mapminmax('reverse',y,ps)
dx_dy = mapminmax('dx',x,y,ps)
dx_dy = mapminmax('dx',x,[],ps)
name = mapminmax('name');
fp = mapminmax('pdefaults');
names = mapminmax('pnames');
mapminmax('pcheck', fp);
Description
MAPMINMAX processes matrices by normalizing the minimum and maximum values
of each row to [YMIN, YMAX].
MAPMINMAX(X,YMIN,YMAX) takes X and optional parameters,
X - NxQ matrix or a 1xTS row cell array of NxQ matrices.
YMIN - Minimum value for each row of Y. (Default is -1)
YMAX - Maximum value for each row of Y. (Default is +1)
and returns,
Y - Each MxQ matrix (where M == N) (optional).
PS - Process settings, to allow consistent processing of values.
MAPMINMAX(X,FP) takes parameters as struct: FP.ymin, FP.ymax.
MAPMINMAX('apply',X,PS) returns Y, given X and settings PS.
MAPMINMAX('reverse',Y,PS) returns X, given Y and settings PS.
MAPMINMAX('dx',X,Y,PS) returns MxNxQ derivative of Y w/respect to X.
MAPMINMAX('dx',X,[],PS) returns the derivative, less efficiently.
MAPMINMAX('name') returns the name of this process method.
MAPMINMAX('pdefaults') returns default process parameter structure.
MAPMINMAX('pdesc') returns the process parameter descriptions.
MAPMINMAX('pcheck',fp) throws an error if any parameter is illegal.
Examples
Here is how to format a matrix so that the minimum and maximum
values of each row are mapped to default interval [-1,+1].
x1 = [1 2 4; 1 1 1; 3 2 2; 0 0 0]
[y1,ps] = mapminmax(x1)
Next, we apply the same processing settings to new values.
x2 = [5 2 3; 1 1 1; 6 7 3; 0 0 0]
y2 = mapminmax('apply',x2,ps)
Here we reverse the processing of y1 to get x1 again.
x1_again = mapminmax('reverse',y1,ps)
Algorithm
It is assumed that X has only finite real values, and that
the elements of each row are not all equal.
y = (ymax-ymin)*(x-xmin)/(xmax-xmin) + ymin;
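A quick by-hand check of that formula; this sketch uses a small matrix chosen without constant rows, as the assumption above requires, rather than the x1 from the example.
x = [1 2 4; 3 -2 2];
[y,ps] = mapminmax(x);
xmin = min(x,[],2);  xmax = max(x,[],2);
ycheck = 2*(x - repmat(xmin,1,3)) ./ repmat(xmax-xmin,1,3) - 1;
% ycheck should match y, since ymax-ymin = 2 and ymin = -1 by default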
See also FIXUNKNOWNS, MAPSTD, PROCESSPCA, REMOVECONSTANTROWS
ApplicationRoot\WavixIV\neural501
16-Jun-2006 21:37:00
4793 bytes
MAPSTD Map matrix row means and deviations to standard values.
Syntax
[y,ps] = mapstd(x,ymean,ystd)
[y,ps] = mapstd(x,fp)
y = mapstd('apply',x,ps)
x = mapstd('reverse',y,ps)
dx_dy = mapstd('dx',x,y,ps)
dx_dy = mapstd('dx',x,[],ps)
name = mapstd('name');
fp = mapstd('pdefaults');
names = mapstd('pnames');
mapstd('pcheck',fp);
Description
MAPSTD processes matrices by transforming the mean and standard
deviation of each row to YMEAN and YSTD.
MAPSTD(X,YMEAN,YSTD) takes X and optional parameters,
X - NxQ matrix or a 1xTS row cell array of NxQ matrices.
YMEAN - Mean value for each row of Y. (Default is 0)
YSTD - Standard deviation for each row of Y. (Default is 1)
and returns,
Y - Each MxQ matrix (where M == N) (optional).
PS - Process settings, to allow consistent processing of values.
MAPSTD(X,FP) takes parameters as struct: FP.ymean, FP.ystd.
MAPSTD('apply',X,PS) returns Y, given X and settings PS.
MAPSTD('reverse',Y,PS) returns X, given Y and settings PS.
MAPSTD('dx',X,Y,PS) returns MxNxQ derivative of Y w/respect to X.
MAPSTD('dx',X,[],PS) returns the derivative, less efficiently.
MAPSTD('name') returns the name of this process method.
MAPSTD('pdefaults') returns default process parameter structure.
MAPSTD('pdesc') returns the process parameter descriptions.
MAPSTD('pcheck',fp) throws an error if any parameter is illegal.
Examples
Here is how to format a matrix so that the mean and standard
deviation of each row are mapped to the default mean and std of 0 and 1.
x1 = [1 2 4; 1 1 1; 3 2 2; 0 0 0]
[y1,ps] = mapstd(x1)
Next, we apply the same processing settings to new values.
x2 = [5 2 3; 1 1 1; 6 7 3; 0 0 0]
y2 = mapstd('apply',x2,ps)
Here we reverse the processing of y1 to get x1 again.
x1_again = mapstd('reverse',y1,ps)
Algorithm
It is assumed that X has only finite real values, and that
the elements of each row are not all equal.
y = (x-xmean)*(ystd/xstd) + ymean;
See also MAPMINMAX, FIXUNKNOWNS, PROCESSPCA, REMOVECONSTANTROWS
ApplicationRoot\WavixIV\neural501
16-Jun-2006 21:37:02
4512 bytes
MAXLINLR Maximum learning rate for a linear layer.
Syntax
lr = maxlinlr(P)
lr = maxlinlr(P,'bias')
Description
MAXLINLR is used to calculate learning rates for NEWLIN.
MAXLINLR(P) takes one argument,
P - RxQ matrix of input vectors.
and returns the maximum learning rate for a linear layer
without a bias that is to be trained only on the vectors in P.
MAXLINLR(P,'bias') returns the maximum learning rate for
a linear layer with a bias.
Examples
Here we define a batch of 4 2-element input vectors and
find the maximum learning rate for a linear layer with
a bias.
P = [1 2 -4 7; 0.1 3 10 6];
lr = maxlinlr(P,'bias')
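The returned rate can then be handed straight to NEWLIN; a short sketch, in which the target values T are an illustrative assumption.
T = [1 -1 0.5 2];
net = newlin(minmax(P),1,0,lr);   % one linear neuron, no input delays, rate lr
net = train(net,P,T);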
See also LEARNWH.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:18:56
1137 bytes
MIDPOINT Midpoint weight initialization function.
Syntax
W = midpoint(S,PR)
Description
MIDPOINT is a weight initialization function that
sets weight (row) vectors to the center of the
input ranges.
MIDPOINT(S,PR) takes two arguments,
S - Number of rows (neurons).
PR - Rx2 matrix of input value ranges = [Pmin Pmax].
and returns an SxR matrix with rows set to (Pmin+Pmax)'/2.
Examples
Here initial weight values are calculated for a 5 neuron
layer with input elements ranging over [0 1] and [-2 2].
W = midpoint(5,[0 1; -2 2])
Network Use
You can create a standard network that uses MIDPOINT to initialize
weights by calling NEWC.
To prepare the weights and the bias of layer i of a custom network
to initialize with MIDPOINT:
1) Set NET.initFcn to 'initlay'.
(NET.initParam will automatically become INITLAY's default parameters.)
2) Set NET.layers{i}.initFcn to 'initwb'.
3) Set each NET.inputWeights{i,j}.initFcn to 'midpoint'.
Set each NET.layerWeights{i,j}.initFcn to 'midpoint';
To initialize the network call INIT.
See NEWC for initialization examples.
See also INITWB, INITLAY, INIT.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:30
1861 bytes
MINMAX Ranges of matrix rows.
Syntax
pr = minmax(p)
Description
MINMAX(P) takes one argument,
P - RxQ matrix.
and returns the Rx2 matrix PR of minimum and maximum values
for each row of P.
Alternately, P can be an MxN cell array of matrices. Each matrix
P{i,j} should have Ri rows and Q columns. In this case, MINMAX returns
an Mx1 cell array where the ith matrix is an Rix2 matrix of the
minimum and maximum values of elements for the matrices on the
ith row of P.
Examples
p = [0 1 2; -1 -2 -0.5]
pr = minmax(p)
p = {[0 1; -1 -2] [2 3 -2; 8 0 2]; [1 -2] [9 7 3]};
pr = minmax(p)
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:18
1034 bytes
MSE Mean squared error performance function.
Syntax
perf = mse(E,Y,X,FP)
dPerf_dy = mse('dy',E,Y,X,perf,FP);
dPerf_dx = mse('dx',E,Y,X,perf,FP);
info = mse(code)
Description
MSE is a network performance function. It measures the
network's performance according to the mean of squared errors.
MSE(E,Y,X,FP) takes E and optional function parameters,
E - Matrix or cell array of error vectors.
Y - Matrix or cell array of output vectors. (ignored).
X - Vector of all weight and bias values (ignored).
FP - Function parameters (ignored).
and returns the mean squared error.
MSE('dy',E,Y,X,PERF,FP) returns derivative of PERF with respect to Y.
MSE('dx',E,Y,X,PERF,FP) returns derivative of PERF with respect to X.
MSE('name') returns the name of this function.
MSE('pnames') returns the names of the performance parameters.
MSE('pdefaults') returns the default function parameters.
Examples
Here a two layer feed-forward network is created with a 1-element
input ranging from -10 to 10, four hidden TANSIG neurons, and one
PURELIN output neuron.
net = newff([-10 10],[4 1],{'tansig','purelin'});
Here the network is given a batch of inputs P. The error
is calculated by subtracting the output A from target T.
Then the mean squared error is calculated.
p = [-10 -5 0 5 10];
t = [0 0 1 1 1];
y = sim(net,p)
e = t-y
perf = mse(e)
Note that MSE can be called with only one argument because the
other arguments are ignored. MSE supports those ignored arguments
to conform to the standard performance function argument list.
Network Use
You can create a standard network that uses MSE with NEWFF,
NEWCF, or NEWELM.
To prepare a custom network to be trained with MSE set
NET.performFcn to 'mse'. This will automatically set
NET.performParam to the empty matrix [], as MSE has no
performance parameters.
In either case, calling TRAIN or ADAPT will result
in MSE being used to calculate performance.
See NEWFF or NEWCF for examples.
See also MSEREG, MAE, DMSE
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:20
3657 bytes
MSEREG Mean squared error with regularization performance function.
Syntax
perf = msereg(E,Y,X,FP)
dPerf_dy = msereg('dy',E,Y,X,perf,FP);
dPerf_dx = msereg('dx',E,Y,X,perf,FP);
info = msereg(code)
Description
MSEREG is a network performance function. It measures
network performance as the weighted sum of two factors:
the mean squared error and the mean squared weights and biases.
MSEREG(E,Y,X,FP) takes E and optional function parameters,
E - Matrix or cell array of error vectors.
Y - Matrix or cell array of output vectors. (ignored).
X - Vector of all weight and bias values.
FP.ratio - Ratio of importance between errors and weights.
and returns the mean squared error weighted by FP.ratio, plus the mean
squared weights and biases weighted by (1-FP.ratio).
MSEREG('dy',E,Y,X,PERF,FP) returns derivative of PERF with respect to Y.
MSEREG('dx',E,Y,X,PERF,FP) returns derivative of PERF with respect to X.
MSEREG('name') returns the name of this function.
MSEREG('pnames') returns the names of the performance parameters.
MSEREG('pdefaults') returns the default function parameters.
Examples
Here a two layer feed-forward is created with a 1-element input
ranging from -2 to 2, four hidden TANSIG neurons, and one
PURELIN output neuron.
net = newff([-2 2],[4 1],{'tansig','purelin'},'trainlm','learngdm','msereg');
Here the network is given a batch of inputs P. The error is
calculated by subtracting the output A from target T. Then the
mean squared error is calculated using a ratio of 20/(20+1).
(Errors are 20 times as important as weight and bias values).
p = [-2 -1 0 1 2];
t = [0 1 1 1 0];
y = sim(net,p)
e = t-y
net.performParam.ratio = 20/(20+1);
perf = msereg(e,net)
Network Use
You can create a standard network that uses MSEREG with NEWFF,
NEWCF, or NEWELM.
To prepare a custom network to be trained with MSEREG, set
NET.performFcn to 'msereg'. This will automatically set
NET.performParam to MSEREG's default performance parameters.
In either case, calling TRAIN or ADAPT will result
in MSEREG being used to calculate performance.
See NEWFF or NEWCF for examples.
See also MSE, MAE, DMSEREG.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:20
4160 bytes
MSEREGEC Mean squared error with regularization and economization performance function.
Syntax
perf = mseregec(E,Y,X,FP)
dPerf_dy = mseregec('dy',E,Y,X,perf,FP);
dPerf_dx = mseregec('dx',E,Y,X,perf,FP);
info = mseregec(code)
Description
MSEREGEC is a network performance function. It measures
network performance as the weighted sum of three factors:
the mean squared error, the mean squared weights and biases,
and the mean squared output.
MSEREGEC(E,Y,X,FP) takes these arguments,
E - SxQ error matrix or NxTS cell array of such matrices.
Y - SxQ output matrix or NxTS cell array of such matrices.
X - Vector of weight and bias values.
FP.reg - Importance of minimizing weights relative to errors.
FP.econ - Importance of minimizing outputs relative to errors.
and returns the mean squared error, plus FP.reg times the mean
squared weights, plus FP.econ times the mean squared output.
MSEREGEC('dy',E,Y,X,PERF,FP) returns derivative of PERF with respect to Y.
MSEREGEC('dx',E,Y,X,PERF,FP) returns derivative of PERF with respect to X.
MSEREGEC('name') returns the name of this function.
MSEREGEC('pnames') returns the names of the performance parameters.
MSEREGEC('pdefaults') returns the default function parameters.
Examples
Here a two layer feed-forward is created with a 1-element input
ranging from -2 to 2, four hidden TANSIG neurons, and one
PURELIN output neuron.
net = newff([-2 2],[4 1],{'tansig','purelin'},'trainlm','learngdm','msereg');
Here the network is given a batch of inputs P. The error is
calculated by subtracting the output A from target T. Then the
mean squared error is calculated using a ratio of 20/(20+1).
(Errors are 20 times as important as weight and bias values).
p = [-2 -1 0 1 2];
t = [0 1 1 1 0];
y = sim(net,p)
e = t-y
net.performParam.ratio = 20/(20+1);
perf = msereg(e,net)
Network Use
You can create a standard network that uses MSEREGEC with NEWFF,
NEWCF, or NEWELM.
To prepare a custom network to be trained with MSEREGEC, set
NET.performFcn to 'mseregec'. This will automatically set
NET.performParam to MSEREGEC's default performance parameters.
In either case, calling TRAIN or ADAPT will result
in MSEREGEC being used to calculate performance.
See NEWFF or NEWCF for examples.
See also MSE, MAE, DMSEREG.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:22
4498 bytes
NEGDIST Negative distance weight function.
Syntax
Z = negdist(W,P,FP)
info = negdist(code)
dim = negdist('size',S,R,FP)
dp = negdist('dp',W,P,Z,FP)
dw = negdist('dw',W,P,Z,FP)
Description
NEGDIST is a weight function. Weight functions apply
weights to an input to get weighted inputs.
NEGDIST(W,P,FP) takes these inputs,
W - SxR weight matrix.
P - RxQ matrix of Q input (column) vectors.
FP - Row cell array of function parameters (optional, ignored).
and returns the SxQ matrix of negative vector distances.
NEGDIST(code) returns information about this function.
These codes are defined:
'deriv' - Name of derivative function.
'fullderiv' - Full derivative = 1, linear derivative = 0.
'name' - Full name.
'fpnames' - Returns names of function parameters.
'fpdefaults' - Returns default function parameters.
NEGDIST('size',S,R,FP) takes the layer dimension S, input dimension R,
and function parameters, and returns the weight size [SxR].
NEGDIST('dp',W,P,Z,FP) returns the derivative of Z with respect to P.
NEGDIST('dw',W,P,Z,FP) returns the derivative of Z with respect to W.
Examples
Here we define a random weight matrix W and input vector P
and calculate the corresponding weighted input Z.
W = rand(4,3);
P = rand(3,1);
Z = negdist(W,P)
Network Use
You can create a standard network that uses NEGDIST
by calling NEWC or NEWSOM.
To change a network so an input weight uses NEGDIST, set
NET.inputWeight{i,j}.weightFcn to 'negdist'. For a layer weight
set NET.layerWeight{i,j}.weightFcn to 'negdist'.
In either case, call SIM to simulate the network with NEGDIST.
See NEWC or NEWSOM for simulation examples.
Algorithm
NEGDIST returns the negative Euclidean distance:
z = -sqrt(sum((w-p).^2))
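The full SxQ matrix of negative distances can be computed explicitly as
follows (an illustrative sketch, not the toolbox implementation):
W = rand(4,3); P = rand(3,2);
Z = zeros(size(W,1),size(P,2));
for i = 1:size(W,1)
for q = 1:size(P,2)
Z(i,q) = -norm(W(i,:)' - P(:,q));   % negative Euclidean distance
end
end
Z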
See also SIM, DOTPROD, DIST
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:24
4302 bytes
NETINV Inverse transfer function.
Syntax
A = netinv(N,FP)
dA_dN = netinv('dn',N,A,FP)
info = netinv(code)
Description
NETINV is a transfer function. Transfer functions
calculate a layer's output from its net input.
NETINV(N,FP) takes inputs,
N - SxQ matrix of net input (column) vectors.
FP - Struct of function parameters (ignored).
and returns 1/N.
NETINV('dn',N,A,FP) returns the derivative of A with respect to N.
If A or FP are not supplied or are set to [], FP reverts to
the default parameters, and A is calculated from N.
NETINV('name') returns the name of this function.
NETINV('output',FP) returns the [min max] output range.
NETINV('active',FP) returns the [min max] active input range.
NETINV('fullderiv') returns 1 or 0, whether DA_DN is SxSxQ or SxQ.
NETINV('fpnames') returns the names of the function parameters.
NETINV('fpdefaults') returns the default function parameters.
Examples
Here we define 10 5-element net input vectors N, and calculate A.
n = rand(5,10);
a = netinv(n);
Here we assign this transfer function to layer i of a network.
net.layers{i}.transferFcn = 'netinv';
See also TANSIG, LOGSIG
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:10
2662 bytes
NETPROD Product net input function.
Syntax
N = netprod({Z1,Z2,...,Zn},FP)
dN_dZj = netprod('dz',j,Z,N,FP)
INFO = netprod(CODE)
Description
NETPROD is a net input function. Net input functions
calculate a layer's net input by combining its weighted
inputs and biases.
NETPROD({Z1,Z2,...,Zn},FP) takes these arguments,
Zi - SxQ matrices in a row cell array.
FP - Row cell array of function parameters (optional, ignored).
Returns element-wise product of Z1 to Zn.
NETPROD(code) returns information about this function.
These codes are defined:
'deriv' - Name of derivative function.
'fullderiv' - Full NxSxQ derivative = 1, Element-wise SxQ derivative = 0.
'name' - Full name.
'fpnames' - Returns names of function parameters.
'fpdefaults' - Returns default function parameters.
Examples
Here NETPROD combines two sets of weighted input
vectors (which we have defined ourselves).
z1 = [1 2 4;3 4 1];
z2 = [-1 2 2; -5 -6 1];
z = {z1,z2};
n = netprod(z)
Here NETPROD combines the same weighted inputs with
a bias vector. Because Z1 and Z2 each contain three
concurrent vectors, three concurrent copies of B must
be created with CONCUR so that all sizes match up.
b = [0; -1];
z = {z1, z2, concur(b,3)};
n = netprod(z)
Network Use
You can create a standard network that uses NETPROD
by calling NEWPNN or NEWGRNN.
To change a network so that a layer uses NETPROD, set
NET.layers{i}.netInputFcn to 'netprod'.
In either case, call SIM to simulate the network with NETPROD.
See NEWPNN or NEWGRNN for simulation examples.
See also NETWORK/SIM, DNETPROD, NETSUM, CONCUR
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:48
2841 bytes
NETSUM Sum net input function.
Syntax
N = netsum({Z1,Z2,...,Zn},FP)
dN_dZj = netsum('dz',j,Z,N,FP)
INFO = netsum(CODE)
Description
NETSUM is a net input function. Net input functions calculate
a layer's net input by combining its weighted inputs and bias.
NETSUM({Z1,Z2,...,Zn},FP) takes Z1-Zn and optional function parameters,
Zi - SxQ matrices in a row cell array.
FP - Row cell array of function parameters (ignored).
Returns element-wise sum of Z1 to Zn.
NETSUM('dz',j,{Z1,...,Zn},N,FP) returns the derivative of N with
respect to Zj. If FP is not supplied the default values are used.
if N is not supplied, or is [], it is calculated for you.
NETSUM('name') returns the name of this function.
NETSUM('type') returns the type of this function.
NETSUM('fpnames') returns the names of the function parameters.
NETSUM('fpdefaults') returns default function parameter values.
NETSUM('fpcheck',FP) throws an error for illegal function parameters.
NETSUM('fullderiv') returns 0 or 1, depending on whether the derivative is SxQ or NxSxQ.
Examples
Here NETSUM combines two sets of weighted input vectors and a bias.
We must use CONCUR to make B the same dimensions as Z1 and Z2.
z1 = [1 2 4; 3 4 1]
z2 = [-1 2 2; -5 -6 1]
b = [0; -1]
n = netsum({z1,z2,concur(b,3)})
Here we assign this net input function to layer i of a network.
net.layers{i}.netInputFcn = 'netsum';
Use NEWP or NEWLIN to create a standard network that uses NETSUM.
See also NETPROD, NETINV, NETNORMALIZED
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:50
2526 bytes
NEWC Create a competitive layer.
Syntax
net = newc(PR,S,KLR,CLR)
Description
Competitive layers are used to solve classification
problems.
NET = NEWC(PR,S,KLR,CLR) takes these inputs,
PR - Rx2 matrix of min and max values for R input elements.
S - Number of neurons.
KLR - Kohonen learning rate, default = 0.01.
CLR - Conscience learning rate, default = 0.001.
Returns a new competitive layer.
Examples
Here is a set of four two-element vectors P.
P = [.1 .8 .1 .9; .2 .9 .1 .8];
A competitive layer can be used to divide these inputs
into two classes. First a two neuron layer is created
with two input elements ranging from 0 to 1, then it
is trained.
net = newc([0 1; 0 1],2);
net = train(net,P);
The resulting network can then be simulated and its
output vectors converted to class indices.
Y = sim(net,P)
Yc = vec2ind(Y)
Properties
Competitive layers consist of a single layer with the NEGDIST
weight function, NETSUM net input function, and the COMPET
transfer function.
The layer has a weight from the input, and a bias.
Weights and biases are initialized with MIDPOINT and INITCON.
Adaption and training are done with TRAINS and TRAINR,
which both update weight and bias values with the LEARNK
and LEARNCON learning functions.
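These choices can be inspected on a newly created layer; the property names
below follow the network object conventions used elsewhere in these pages.
net = newc([0 1; 0 1],2);
net.inputWeights{1,1}.weightFcn         % expected 'negdist'
net.layers{1}.netInputFcn               % expected 'netsum'
net.layers{1}.transferFcn               % expected 'compet'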
See also SIM, INIT, ADAPT, TRAIN, TRAINS, TRAINR.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:50
3328 bytes
NEWCF Create a cascade-forward backpropagation network.
Syntax
net = newcf(Pr,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
Description
NEWCF(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes,
PR - Rx2 matrix of min and max values for R input elements.
Si - Size of ith layer, for Nl layers.
TFi - Transfer function of ith layer, default = 'tansig'.
BTF - Backprop network training function, default = 'trainlm'.
BLF - Backprop weight/bias learning function, default = 'learngdm'.
PF - Performance function, default = 'mse'.
and returns an N layer cascade-forward backprop network.
The transfer functions TFi can be any differentiable transfer
function such as TANSIG, LOGSIG, or PURELIN.
The training function BTF can be any of the backprop training
functions such as TRAINLM, TRAINBFG, TRAINRP, TRAINGD, etc.
*WARNING*: TRAINLM is the default training function because it
is very fast, but it requires a lot of memory to run. If you get
an "out-of-memory" error when training try doing one of these:
(1) Slow TRAINLM training, but reduce memory requirements, by
setting NET.trainParam.mem_reduc to 2 or more, as shown below. (See HELP TRAINLM.)
(2) Use TRAINBFG, which is slower but more memory efficient than TRAINLM.
(3) Use TRAINRP which is slower but more memory efficient than TRAINBFG.
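For example, option (1) above amounts to the following (illustrative):
net = newcf([0 10],[5 1],{'tansig' 'purelin'});
net.trainParam.mem_reduc = 2;           % trade TRAINLM speed for lower memory use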
The learning function BLF can be either of the backpropagation
learning functions such as LEARNGD, or LEARNGDM.
The performance function can be any of the differentiable performance
functions such as MSE or MSEREG.
Examples
Here is a problem consisting of inputs P and targets T that we would
like to solve with a network.
P = [0 1 2 3 4 5 6 7 8 9 10];
T = [0 1 2 3 4 3 2 1 2 3 4];
Here a two-layer cascade-forward network is created. The network's
input ranges from [0 to 10]. The first layer has five TANSIG
neurons, the second layer has one PURELIN neuron. The TRAINLM
network training function is to be used.
net = newcf([0 10],[5 1],{'tansig' 'purelin'});
Here the network is simulated and its output plotted against
the targets.
Y = sim(net,P);
plot(P,T,P,Y,'o')
Here the network is trained for 50 epochs. Again the network's
output is plotted.
net.trainParam.epochs = 50;
net = train(net,P,T);
Y = sim(net,P);
plot(P,T,P,Y,'o')
Algorithm
Cascade-forward networks consist of Nl layers using the DOTPROD
weight function, NETSUM net input function, and the specified
transfer functions.
The first layer has weights coming from the input. Each subsequent
layer has weights coming from the input and all previous layers.
All layers have biases. The last layer is the network output.
Each layer's weights and biases are initialized with INITNW.
Adaption is done with TRAINS which updates weights with the
specified learning function. Training is done with the specified
training function. Performance is measured according to the specified
performance function.
See also NEWFF, NEWELM, SIM, INIT, ADAPT, TRAIN, TRAINS
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:52
5309 bytes
NEWDTDNN Create a distributed time delay neural network.
Syntax
net = newdtdnn(PR,[D1 D2...DN1],[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
Description
NEWDTDNN(PR,[D1 D2...DN1],[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes,
PR - Rx2 matrix of min and max values for R input elements.
Di - Delay vector for the ith layer.
Si - Size of ith layer, for Nl layers.
TFi - Transfer function of ith layer, default = 'tansig'.
BTF - Backprop network training function, default = 'trainlm'.
BLF - Backprop weight/bias learning function, default = 'learngdm'.
PF - Performance function, default = 'mse'.
and returns an N layer distributed time delay neural network.
The transfer functions TFi can be any differentiable transfer
function such as TANSIG, LOGSIG, or PURELIN.
The training function BTF can be any of the backprop training
functions such as TRAINLM, TRAINBFG, TRAINRP, TRAINGD, etc.
*WARNING*: TRAINLM is the default training function because it
is very fast, but it requires a lot of memory to run. If you get
an "out-of-memory" error when training try doing one of these:
(1) Slow TRAINLM training, but reduce memory requirements, by
setting NET.trainParam.mem_reduc to 2 or more. (See HELP TRAINLM.)
(2) Use TRAINBFG, which is slower but more memory efficient than TRAINLM.
(3) Use TRAINRP which is slower but more memory efficient than TRAINBFG.
The learning function BLF can be either of the backpropagation
learning functions such as LEARNGD, or LEARNGDM.
The performance function can be any of the differentiable performance
functions such as MSE or MSEREG.
Examples
Here is a problem consisting of an input sequence P and target
sequence T that can be solved by a network with one delay.
P = {1 0 0 1 1 0 1 0 0 0 0 1 1 0 0 1};
T = {1 -1 0 1 0 -1 1 -1 0 0 0 1 0 -1 0 1};
Here a two-layer distributed time delay network is created with
delays of 0 and 1 at each layer. The network's input ranges from 0 to 1.
The first layer has five TANSIG neurons, the second layer has one
PURELIN neuron. The TRAINLM network training function is to be used.
net = newdtdnn(minmax(P),{[0 1] [0 1]},[5 1],{'tansig' 'purelin'});
Here the network is simulated.
Y = sim(net,P)
Here the network is trained for 50 epochs. Again the network's
output is calculated.
net.trainParam.epochs = 50;
net = train(net,P,T);
Y = sim(net,P)
Algorithm
Feed-forward networks consist of Nl layers using the DOTPROD
weight function, NETSUM net input function, and the specified
transfer functions.
The first layer has weights coming from the input with the
specified input delays. Each subsequent layer has a weight coming
from the previous layer and specified layer delays. All layers have
biases. The last layer is the network output.
Each layer's weights and biases are initialized with INITNW.
Adaption is done with TRAINS which updates weights with the
specified learning function. Training is done with the specified
training function. Performance is measured according to the specified
performance function.
See also NEWCF, NEWELM, SIM, INIT, ADAPT, TRAIN, TRAINS
ApplicationRoot\WavixIV\neural501
14-Nov-2005 19:17:02
6133 bytes
NEWELM Create an Elman backpropagation network.
Syntax
net = newelm(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
Description
NET = NEWELM(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes several arguments,
PR - Rx2 matrix of min and max values for R input elements.
Si - Size of ith layer, for Nl layers.
TFi - Transfer function of ith layer, default = 'tansig'.
BTF - Backprop network training function, default = 'traingdx'.
BLF - Backprop weight/bias learning function, default = 'learngdm'.
PF - Performance function, default = 'mse'.
and returns an Elman network.
The training function BTF can be any of the backprop training
functions such as TRAINGD, TRAINGDM, TRAINGDA, TRAINGDX, etc.
*WARNING*: Algorithms which take large step sizes, such as TRAINLM,
and TRAINRP, etc., are not recommended for Elman networks. Because
of the delays in Elman networks the gradient of performance used
by these algorithms is only approximated making learning difficult
for large step algorithms.
The learning function BLF can be either of the backpropagation
learning functions such as LEARNGD, or LEARNGDM.
The performance function can be any of the differentiable performance
functions such as MSE or MSEREG.
Examples
Here is a series of Boolean inputs P, and another sequence T
which is 1 wherever P has had two 1's in a row.
P = round(rand(1,20));
T = [0 (P(1:end-1)+P(2:end) == 2)];
We would like the network to recognize whenever two 1's
occur in a row. First we arrange these values as sequences.
Pseq = con2seq(P);
Tseq = con2seq(T);
Next we create an Elman network whose input varies from 0 to 1,
and has ten hidden neurons and one output.
net = newelm([0 1],[10 1],{'tansig','logsig'});
Then we train the network with a mean squared error goal of
0.1, and simulate it.
net.trainParam.goal = 0.1;
net = train(net,Pseq,Tseq);
Y = sim(net,Pseq)
Algorithm
Elman networks consist of Nl layers using the DOTPROD
weight function, NETSUM net input function, and the specified
transfer functions.
The first layer has weights coming from the input. Each subsequent
layer has a weight coming from the previous layer. All layers except
the last have a recurrent weight. All layers have biases. The last
layer is the network output.
Each layer's weights and biases are initialized with INITNW.
Adaption is done with TRAINS which updates weights with the
specified learning function. Training is done with the specified
training function. Performance is measured according to the specified
performance function.
See also NEWFF, NEWCF, SIM, INIT, ADAPT, TRAIN, TRAINS
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:52
4959 bytes
NEWFF Create a feed-forward backpropagation network.
Syntax
net = newff(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
Description
NEWFF(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes,
PR - Rx2 matrix of min and max values for R input elements.
Si - Size of ith layer, for Nl layers.
TFi - Transfer function of ith layer, default = 'tansig'.
BTF - Backprop network training function, default = 'trainlm'.
BLF - Backprop weight/bias learning function, default = 'learngdm'.
PF - Performance function, default = 'mse'.
and returns an N layer feed-forward backprop network.
The transfer functions TFi can be any differentiable transfer
function such as TANSIG, LOGSIG, or PURELIN.
The training function BTF can be any of the backprop training
functions such as TRAINLM, TRAINBFG, TRAINRP, TRAINGD, etc.
*WARNING*: TRAINLM is the default training function because it
is very fast, but it requires a lot of memory to run. If you get
an "out-of-memory" error when training try doing one of these:
(1) Slow TRAINLM training, but reduce memory requirements, by
setting NET.trainParam.mem_reduc to 2 or more. (See HELP TRAINLM.)
(2) Use TRAINBFG, which is slower but more memory efficient than TRAINLM.
(3) Use TRAINRP which is slower but more memory efficient than TRAINBFG.
The learning function BLF can be either of the backpropagation
learning functions such as LEARNGD, or LEARNGDM.
The performance function can be any of the differentiable performance
functions such as MSE or MSEREG.
Examples
Here is a problem consisting of inputs P and targets T that we would
like to solve with a network.
P = [0 1 2 3 4 5 6 7 8 9 10];
T = [0 1 2 3 4 3 2 1 2 3 4];
Here a two-layer feed-forward network is created. The network's
input ranges from [0 to 10]. The first layer has five TANSIG
neurons, the second layer has one PURELIN neuron. The TRAINLM
network training function is to be used.
net = newff(minmax(P),[5 1],{'tansig' 'purelin'});
Here the network is simulated and its output plotted against
the targets.
Y = sim(net,P);
plot(P,T,P,Y,'o')
Here the network is trained for 50 epochs. Again the network's
output is plotted.
net.trainParam.epochs = 50;
net = train(net,P,T);
Y = sim(net,P);
plot(P,T,P,Y,'o')
Algorithm
Feed-forward networks consist of Nl layers using the DOTPROD
weight function, NETSUM net input function, and the specified
transfer functions.
The first layer has weights coming from the input. Each subsequent
layer has a weight coming from the previous layer. All layers
have biases. The last layer is the network output.
Each layer's weights and biases are initialized with INITNW.
Adaption is done with TRAINS which updates weights with the
specified learning function. Training is done with the specified
training function. Performance is measured according to the specified
performance function.
See also NEWCF, NEWELM, SIM, INIT, ADAPT, TRAIN, TRAINS
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:54
5371 bytes
NEWFFTD Create a feed-forward input-delay backprop network.
Syntax
net = newfftd(PR,ID,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
Description
NEWFFTD(PR,ID,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes,
PR - Rx2 matrix of min and max values for R input elements.
ID - Input delay vector.
Si - Size of ith layer, for Nl layers.
TFi - Transfer function of ith layer, default = 'tansig'.
BTF - Backprop network training function, default = 'trainlm'.
BLF - Backprop weight/bias learning function, default = 'learngdm'.
PF - Performance function, default = 'mse'.
and returns an N layer feed-forward backprop network.
The transfer functions TFi can be any differentiable transfer
function such as TANSIG, LOGSIG, or PURELIN.
The training function BTF can be any of the backprop training
functions such as TRAINLM, TRAINBFG, TRAINRP, TRAINGD, etc.
*WARNING*: TRAINLM is the default training function because it
is very fast, but it requires a lot of memory to run. If you get
an "out-of-memory" error when training try doing one of these:
(1) Slow TRAINLM training, but reduce memory requirements, by
setting NET.trainParam.mem_reduc to 2 or more. (See HELP TRAINLM.)
(2) Use TRAINBFG, which is slower but more memory efficient than TRAINLM.
(3) Use TRAINRP which is slower but more memory efficient than TRAINBFG.
The learning function BLF can be either of the backpropagation
learning functions such as LEARNGD, or LEARNGDM.
The performance function can be any of the differentiable performance
functions such as MSE or MSEREG.
Examples
Here is a problem consisting of an input sequence P and target
sequence T that can be solved by a network with one delay.
P = {1 0 0 1 1 0 1 0 0 0 0 1 1 0 0 1};
T = {1 -1 0 1 0 -1 1 -1 0 0 0 1 0 -1 0 1};
Here a two-layer feed-forward network is created with input
delays of 0 and 1. The network's input ranges from [0 to 1].
The first layer has five TANSIG neurons, the second layer has one
PURELIN neuron. The TRAINLM network training function is to be used.
net = newfftd([0 1],[0 1],[5 1],{'tansig' 'purelin'});
Here the network is simulated.
Y = sim(net,P)
Here the network is trained for 50 epochs. Again the network's
output is calculated.
net.trainParam.epochs = 50;
net = train(net,P,T);
Y = sim(net,P)
Algorithm
Feed-forward networks consist of Nl layers using the DOTPROD
weight function, NETSUM net input function, and the specified
transfer functions.
The first layer has weights coming from the input with the
specified input delays. Each subsequent layer has a weight coming
from the previous layer. All layers have biases. The last layer
is the network output.
Each layer's weights and biases are initialized with INITNW.
Adaption is done with TRAINS which updates weights with the
specified learning function. Training is done with the specified
training function. Performance is measured according to the specified
performance function.
See also NEWCF, NEWELM, SIM, INIT, ADAPT, TRAIN, TRAINS
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:54
5597 bytes
NEWGRNN Design a generalized regression neural network.
Synopsis
net = newgrnn(P,T,SPREAD)
Description
Generalized regression neural networks are a kind
of radial basis network that is often used for function
approximation. GRNNs can be designed very quickly.
NEWGRNN(P,T,SPREAD) takes these inputs,
P - RxQ matrix of Q input vectors.
T - SxQ matrix of Q target class vectors.
SPREAD - Spread of radial basis functions, default = 1.0.
and returns a new generalized regression neural network.
The larger SPREAD is, the smoother the function approximation
will be. To fit data closely, use a SPREAD smaller than the
typical distance between input vectors. To fit the data more
smoothly use a larger SPREAD.
Examples
Here we design a radial basis network given inputs P
and targets T.
P = [1 2 3];
T = [2.0 4.1 5.9];
net = newgrnn(P,T);
Here the network is simulated for a new input.
P = 1.5;
Y = sim(net,P)
Properties
NEWGRNN creates a two layer network. The first layer has
RADBAS neurons, calculates weighted inputs with DIST and
net input with NETPROD. The second layer has PURELIN neurons,
calculates weighted input with NORMPROD and net inputs with NETSUM.
Only the first layer has biases.
NEWGRNN sets the first layer weights to P', and the first
layer biases are all set to 0.8326/SPREAD, resulting in
radial basis functions that cross 0.5 at weighted inputs
of +/- SPREAD. The second layer weights W2 are set to T.
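The 0.8326 factor can be checked directly: RADBAS computes a = exp(-n^2), and
at n = 0.8326 the output is roughly 0.5, so dividing by SPREAD places that
half-height point at a distance of SPREAD (a quick illustrative check):
radbas(0.8326)                          % approximately 0.5
exp(-0.8326^2)                          % same value computed directly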
References:
P.D. Wasserman, Advanced Methods in Neural Computing, New York:
Van Nostrand Reinhold, pp. 155-61, 1993.
See also SIM, NEWRB, NEWRBE, NEWPNN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:56
3130 bytes
NEWHOP Create a Hopfield recurrent network.
Syntax
net = newhop(T)
Description
Hopfield networks are used for pattern recall.
NEWHOP(T) takes one input argument,
T - RxQ matrix of Q target vectors. (Values must be +1 or -1.)
and returns a new Hopfield recurrent neural network with
stable points at the vectors in T.
Examples
Here we create a Hopfield network with two three-element
stable points T.
T = [-1 -1 1; 1 -1 1]';
net = newhop(T);
Below we check that the network is stable at these points by
using them as initial layer delay conditions. If the network is
stable we would expect that the outputs Y will be the same.
(Since Hopfield networks have no inputs, the second argument
to SIM is Q = 2 when using matrix notation).
Ai = T;
[Y,Pf,Af] = sim(net,2,[],Ai);
Y
To see if the network can correct a corrupted vector, run
the following code which simulates the Hopfield network for
five timesteps. (Since Hopfield networks have no inputs,
the second argument to SIM is {Q TS} = [1 5] when using cell
array notation.)
Ai = {[-0.9; -0.8; 0.7]};
[Y,Pf,Af] = sim(net,{1 5},{},Ai);
Y{1}
If you run the above code Y{1} will equal T(:,1) if the
network has managed to convert the corrupted vector Ai to
the nearest target vector.
Algorithm
Hopfield networks are designed to have stable layer outputs
as defined by user supplied targets. The algorithm
minimizes the number of unwanted stable points.
Properties
Hopfield networks consist of a single layer with the DOTPROD
weight function, NETSUM net input function, and the SATLINS
transfer function.
The layer has a recurrent weight from itself and a bias.
Reference
J. Li, A. N. Michel, W. Porod, "Analysis and synthesis of a
class of neural networks: linear systems operating on a
closed hypercube," IEEE Transactions on Circuits and Systems,
vol. 36, no. 11, pp. 1405-1422, November 1989.
See also SIM, SATLINS.
ApplicationRoot\WavixIV\neural501
25-Jan-2006 19:49:20
3482 bytes
NEWLIN Create a linear layer.
Syntax
net = newlin(PR,S,ID,LR)
Description
Linear layers are often used as adaptive filters
for signal processing and prediction.
NEWLIN(PR,S,ID,LR) takes these arguments,
PR - Rx2 matrix of min and max values for R input elements.
S - Number of elements in the output vector.
ID - Input delay vector, default = [0].
LR - Learning rate, default = 0.01;
and returns a new linear layer.
NET = NEWLIN(PR,S,0,P) takes an alternate argument,
P - Matrix of input vectors.
and returns a linear layer with the maximum stable
learning rate for learning with inputs P.
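For example, the alternate form above can be used as follows (illustrative
input values):
P = [1.0 -0.5 0.2 0.8 -1.0];
net = newlin([-1 1],1,0,P);             % learning rate set to the maximum stable value for P
Y = sim(net,P)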
Examples
This code creates a single-input linear layer (input range [-1 1])
with one neuron, input delays of 0 and 1, and a learning
rate of 0.01. It is simulated for an input sequence P1.
net = newlin([-1 1],1,[0 1],0.01);
P1 = {0 -1 1 1 0 -1 1 0 0 1};
Y = sim(net,P1)
Here targets T1 are defined and the layer adapts to them.
(Since this is the first call to ADAPT, the default input
delay conditions are used.)
T1 = {0 -1 0 2 1 -1 0 1 0 1};
[net,Y,E,Pf] = adapt(net,P1,T1); Y
Here the linear layer continues to adapt for a new sequence
using the previous final conditions PF as initial conditions.
P2 = {1 0 -1 -1 1 1 1 0 -1};
T2 = {2 1 -1 -2 0 2 2 1 0};
[net,Y,E,Pf] = adapt(net,P2,T2,Pf); Y
Here we initialize the layer's weights and biases to new values.
net = init(net);
Here we train the newly initialized layer on the entire sequence
for 200 epochs to an error goal of 0.1.
P3 = [P1 P2];
T3 = [T1 T2];
net.trainParam.epochs = 200;
net.trainParam.goal = 0.1;
net = train(net,P3,T3);
Y = sim(net,[P1 P2])
Algorithm
Linear layers consist of a single layer with the DOTPROD
weight function, NETSUM net input function, and PURELIN
transfer function.
The layer has a weight from the input and a bias.
Weights and biases are initialized with INITZERO.
Adaption and training are done with TRAINS and TRAINB,
which both update weight and bias values with LEARNWH.
Performance is measured with MSE.
See also NEWLIND, SIM, INIT, ADAPT, TRAIN, TRAINB, TRAINS.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:58
4451 bytes
NEWLIND Design a linear layer.
Syntax
net = newlind(P,T,Pi)
Description
NEWLIND(P,T,Pi) takes these input arguments,
P - RxQ matrix of Q input vectors.
T - SxQ matrix of Q target class vectors.
Pi - 1xID cell array of initial input delay states,
each element Pi{i,k} is an RixQ matrix, default = [].
and returns a linear layer designed to output T
(with minimum sum square error) given input P.
NEWLIND(P,T,Pi) can also solve for linear networks with input delays and
multiple inputs and layers by supplying input and target data in cell
array form:
P - NixTS cell array, each element P{i,ts} is an RixQ input matrix.
T - NtxTS cell array, each element T{i,ts} is a VixQ matrix.
Pi - NixID cell array, each element Pi{i,k} is an RixQ matrix, default = [].
returns a linear network with ID input delays, Ni network inputs, Nl layers,
and designed to output T (with minimum sum square error) given input P.
Examples
We would like a linear layer that outputs T given P
for the following definitions.
P = [1 2 3];
T = [2.0 4.1 5.9];
Here we use NEWLIND to design a linear network that minimizes
the sum squared error between its output Y and T.
net = newlind(P,T);
Y = sim(net,P)
We would like another linear layer that outputs the sequence T
given the sequence P and two initial input delay states Pi.
P = {1 2 1 3 3 2};
Pi = {1 3};
T = {5.0 6.1 4.0 6.0 6.9 8.0};
net = newlind(P,T,Pi);
Y = sim(net,P,Pi)
We would like a linear network with two outputs Y1 and Y2, that generate
sequences T1 and T2, given the sequences P1 and P2 with 3 initial input
delay states Pi1 for input 1, and 3 initial delays states Pi2 for input 2.
P1 = {1 2 1 3 3 2}; Pi1 = {1 3 0};
P2 = {1 2 1 1 2 1}; Pi2 = {2 1 2};
T1 = {5.0 6.1 4.0 6.0 6.9 8.0};
T2 = {11.0 12.1 10.1 10.9 13.0 13.0};
net = newlind([P1; P2],[T1; T2],[Pi1; Pi2]);
Y = sim(net,[P1; P2],[Pi1; Pi2]);
Y1 = Y(1,:)
Y2 = Y(2,:)
Algorithm
NEWLIND calculates weight W and bias B values for a
linear layer from inputs P and targets T by solving
this linear equation in the least squares sense:
[W b] * [P; ones] = T
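The same least-squares problem can be solved directly with matrix right
division, using the data from the first example above (an illustrative sketch):
P = [1 2 3];
T = [2.0 4.1 5.9];
Wb = T / [P; ones(1,size(P,2))];        % solves [W b]*[P; ones] = T in the least-squares sense
W = Wb(1:end-1), b = Wb(end)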
See also SIM, NEWLIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:58
6225 bytes
NEWLRN Create a Layered-Recurrent network.
Syntax
net = newlrn(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
Description
NET = NEWLRN(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes several arguments,
PR - Rx2 matrix of min and max values for R input elements.
Si - Size of ith layer, for Nl layers.
TFi - Transfer function of ith layer, default = 'tansig'.
BTF - Backprop network training function, default = 'trainlm'.
BLF - Backprop weight/bias learning function, default = 'learngdm'.
PF - Performance function, default = 'mse'.
and returns a Layered-Recurrent network.
The training function BTF can be any of the backprop training
functions such as TRAINLM, TRAINBFG, TRAINSCG, TRAINBR, etc.
The learning function BLF can be either of the backpropagation
learning functions such as LEARNGD, or LEARNGDM.
The performance function can be any of the differentiable performance
functions such as MSE or MSEREG.
Examples
Here is a series of Boolean inputs P, and another sequence T
which is 1 wherever P has had two 1's in a row.
P = round(rand(1,20));
T = [0 (P(1:end-1)+P(2:end) == 2)];
We would like the network to recognize whenever two 1's
occur in a row. First we arrange these values as sequences.
Pseq = con2seq(P);
Tseq = con2seq(T);
Next we create a layered-recurrent network whose input varies from 0 to 1,
and has ten hidden neurons and one output.
net = newlrn(minmax(P),[10 1],{'tansig','logsig'});
Then we train the network with a mean squared error goal of
0.1, and simulate it.
net.trainParam.goal = 0.1;
net = train(net,Pseq,Tseq);
Y = sim(net,Pseq)
Algorithm
Layered-Recurrent networks consist of Nl layers using the DOTPROD
weight function, NETSUM net input function, and the specified
transfer functions.
The first layer has weights coming from the input. Each subsequent
layer has a weight coming from the previous layer. All layers except
the last have a recurrent weight. All layers have biases. The last
layer is the network output.
Each layer's weights and biases are initialized with INITNW.
Adaption is done with TRAINS which updates weights with the
specified learning function. Training is done with the specified
training function. Performance is measured according to the specified
performance function.
See also NEWFF, NEWCF, SIM, INIT, ADAPT, TRAIN, TRAINS
ApplicationRoot\WavixIV\neural501
14-Nov-2005 19:17:10
4753 bytes
NEWLVQ Create a learning vector quantization network.
Syntax
net = newlvq(PR,S1,PC,LR,LF)
Description
LVQ networks are used to solve classification
problems.
NET = NEWLVQ(PR,S1,PC,LR,LF) takes these inputs,
PR - Rx2 matrix of min and max values for R input elements.
S1 - Number of hidden neurons.
PC - S2 element vector of typical class percentages.
LR - Learning rate, default = 0.01.
LF - Learning function, default = 'learnlv1'.
Returns a new LVQ network.
The learning function LF can be LEARNLV1 or LEARNLV2.
LEARNLV2 should only be used to finish training of networks
already trained with LEARNLV1.
Examples
The input vectors P and target classes Tc below define
a classification problem to be solved by an LVQ network.
P = [-3 -2 -2 0 0 0 0 +2 +2 +3; ...
0 +1 -1 +2 +1 -1 -2 +1 -1 0];
Tc = [1 1 1 2 2 2 2 1 1 1];
Target classes Tc are converted to target vectors T. Then an
LVQ network is created (with input ranges obtained from P,
4 hidden neurons, and class percentages of 0.6 and 0.4)
and is trained.
T = ind2vec(Tc);
net = newlvq(minmax(P),4,[.6 .4]);
net = train(net,P,T);
The resulting network can be tested.
Y = sim(net,P)
Yc = vec2ind(Y)
Properties
NEWLVQ creates a two layer network. The first layer uses the
COMPET transfer function, calculates weighted inputs with NEGDIST, and
net input with NETSUM. The second layer has PURELIN neurons,
calculates weighted input with DOTPROD and net inputs with NETSUM.
Neither layer has biases.
First layer weights are initialized with MIDPOINT. The
second layer weights are set so that each output neuron i
has unit weights coming to it from PC(i) percent of the
hidden neurons.
Adaption and training are done with TRAINS and TRAINR,
which both update the first layer weights with the specified
learning functions.
See also SIM, INIT, ADAPT, TRAIN, TRAINS, TRAINR, LEARNLV1, LEARNLV2.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:00
4075 bytes
NEWNARX Create a feed-forward backpropagation network with feedback from output to input.
Syntax
net = newnarx(PR,ID,OD,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
Description
NEWNARX(PR,ID,OD,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes,
PR - Rx2 matrix of min and max values for R input elements.
ID - Input delay vector.
OD - Output delay vector.
Si - Size of ith layer, for Nl layers.
TFi - Transfer function of ith layer, default = 'tansig'.
BTF - Backprop network training function, default = 'trainlm'.
BLF - Backprop weight/bias learning function, default = 'learngdm'.
PF - Performance function, default = 'mse'.
and returns an N layer feed-forward backprop network with external feedback.
The transfer functions TFi can be any differentiable transfer
function such as TANSIG, LOGSIG, or PURELIN.
The output-to-input feedback delays OD must be integer values greater than
zero, placed in a row vector.
The training function BTF can be any of the backprop training
functions such as TRAINLM, TRAINBFG, TRAINRP, TRAINGD, etc.
*WARNING*: TRAINLM is the default training function because it
is very fast, but it requires a lot of memory to run. If you get
an "out-of-memory" error when training try doing one of these:
(1) Slow TRAINLM training, but reduce memory requirements, by
setting NET.trainParam.mem_reduc to 2 or more. (See HELP TRAINLM.)
(2) Use TRAINBFG, which is slower but more memory efficient than TRAINLM.
(3) Use TRAINRP which is slower but more memory efficient than TRAINBFG.
The learning function BLF can be either of the backpropagation
learning functions such as LEARNGD, or LEARNGDM.
The performance function can be any of the differentiable performance
functions such as MSE or MSEREG.
Examples
Here is a problem consisting of sequences of inputs P and targets T
that we would like to solve with a network.
P = {[0] [1] [1] [0] [-1] [-1] [0] [1] [1] [0] [-1]};
T = {[0] [1] [2] [2] [1] [0] [1] [2] [1] [0] [1]};
Here a two-layer network is created with input delays of 0 and 1
and output feedback delays of 1 and 2. The network's input range
is obtained from P with MINMAX. The first layer has five TANSIG neurons,
the second layer has one PURELIN neuron. The TRAINLM network
training function is to be used.
net = newnarx(minmax(P),[0 1],[1 2],[5 1],{'tansig' 'purelin'});
Here the network is simulated and its output plotted against
the targets.
Y = sim(net,P);
plot(1:11,[T{:}],1:11,[Y{:}],'o')
Here the network is trained for 50 epochs. Again the network's
output is plotted.
net.trainParam.epochs = 50;
net = train(net,P,T);
Yf = sim(net,P);
plot(1:11,[T{:}],1:11,[Y{:}],'o',1:11,[Yf{:}],'+')
Algorithm
Feed-forward networks consist of Nl layers using the DOTPROD
weight function, NETSUM net input function, and the specified
transfer functions.
The first layer has weights coming from the input. Each subsequent
layer has a weight coming from the previous layer. All layers
have biases. The last layer is the network output.
Each layer's weights and biases are initialized with INITNW.
Adaption is done with TRAINS which updates weights with the
specified learning function. Training is done with the specified
training function. Performance is measured according to the specified
performance function.
See also NEWCF, NEWELM, SIM, INIT, ADAPT, TRAIN, TRAINS
ApplicationRoot\WavixIV\neural501
14-Nov-2005 19:17:12
4468 bytes
NEWNARXSP Create an NARX network in series-parallel arrangement.
Syntax
net = newnarxsp({PR1 PR2},ID,OD,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
Description
NEWNARXSP({PR1 PR2},ID,OD,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes,
PRi - Rix2 matrix of min and max values for Ri input elements.
ID - Input delay vector.
OD - Output delay vector.
Si - Size of ith layer, for Nl layers.
TFi - Transfer function of ith layer, default = 'tansig'.
BTF - Backprop network training function, default = 'trainlm'.
BLF - Backprop weight/bias learning function, default = 'learngdm'.
PF - Performance function, default = 'mse'.
and returns an N layer feed-forward backprop network with external feedback.
The transfer functions TFi can be any differentiable transfer
function such as TANSIG, LOGSIG, or PURELIN.
The output-to-input feedback delays OD must be integer values greater than
zero, placed in a row vector.
The training function BTF can be any of the backprop training
functions such as TRAINLM, TRAINBFG, TRAINRP, TRAINGD, etc.
*WARNING*: TRAINLM is the default training function because it
is very fast, but it requires a lot of memory to run. If you get
an "out-of-memory" error when training try doing one of these:
(1) Slow TRAINLM training, but reduce memory requirements, by
setting NET.trainParam.mem_reduc to 2 or more. (See HELP TRAINLM.)
(2) Use TRAINBFG, which is slower but more memory efficient than TRAINLM.
(3) Use TRAINRP which is slower but more memory efficient than TRAINBFG.
The learning function BLF can be either of the backpropagation
learning functions such as LEARNGD, or LEARNGDM.
The performance function can be any of the differentiable performance
functions such as MSE or MSEREG.
Examples
Here is a problem consisting of sequences of inputs P and targets T
that we would like to solve with a network.
P = {[0] [1] [1] [0] [-1] [-1] [0] [1] [1] [0] [-1]};
T = {[0] [1] [2] [2] [1] [0] [1] [2] [1] [0] [1]};
PT = [P;T];
Here a two-layer series-parallel network is created with input delays
of 1 and 2 and output feedback delays of 1 and 2. The input ranges
are obtained from PT with MINMAX. The first layer has five TANSIG neurons,
the second layer has one PURELIN neuron. The TRAINLM network
training function is to be used.
net = newnarxsp(minmax(PT),[1 2],[1 2],[5 1],{'tansig' 'purelin'});
Here the network is simulated and its output plotted against
the targets.
Y = sim(net,P);
plot(1:11,[T{:}],1:11,[Y{:}],'o')
Here the network is trained for 50 epochs. Again the network's
output is plotted.
net.trainParam.epochs = 50;
net = train(net,PT,T);
Yf = sim(net,P);
plot(1:11,[T{:}],1:11,[Y{:}],'o',1:11,[Yf{:}],'+')
Algorithm
Feed-forward networks consist of Nl layers using the DOTPROD
weight function, NETSUM net input function, and the specified
transfer functions.
The first layer has weights coming from the input. Each subsequent
layer has a weight coming from the previous layer. All layers
have biases. The last layer is the network output.
Each layer's weights and biases are initialized with INITNW.
Adaption is done with TRAINS which updates weights with the
specified learning function. Training is done with the specified
training function. Performance is measured according to the specified
performance function.
See also NEWCF, NEWELM, SIM, INIT, ADAPT, TRAIN, TRAINS
ApplicationRoot\WavixIV\neural501
14-Nov-2005 19:17:14
4547 bytes
NEWNET Notice regarding GUI.
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:19:10
562 bytes
NEWP Create a perceptron.
Syntax
net = newp(pr,s,tf,lf)
Description
Perceptrons are used to solve simple (i.e. linearly
separable) classification problems.
NET = NEWP(PR,S,TF,LF) takes these inputs,
PR - Rx2 matrix of min and max values for R input elements.
S - Number of neurons.
TF - Transfer function, default = 'hardlim'.
LF - Learning function, default = 'learnp'.
Returns a new perceptron.
The transfer function TF can be HARDLIM or HARDLIMS.
The learning function LF can be LEARNP or LEARNPN.
Examples
This code creates a perceptron layer with one 2-element
input (ranges [0 1] and [-2 2]) and one neuron. (Supplying
only two arguments to NEWP results in the default perceptron
learning function LEARNP being used.)
net = newp([0 1; -2 2],1);
Now we define a problem, an OR gate, with a set of four
2-element input vectors P and the corresponding four
1-element targets T.
P = [0 0 1 1; 0 1 0 1];
T = [0 1 1 1];
Here we simulate the network's output, train for a
maximum of 20 epochs, and then simulate it again.
Y = sim(net,P)
net.trainParam.epochs = 20;
net = train(net,P,T);
Y = sim(net,P)
Notes
Perceptrons can classify linearly separable classes in a
finite amount of time. If input vectors have a large variance
in their lengths, LEARNPN can be faster than LEARNP.
Properties
Perceptrons consist of a single layer with the DOTPROD
weight function, the NETSUM net input function, and the specified
transfer function.
The layer has a weight from the input and a bias.
Weights and biases are initialized with INITZERO.
Adaption and training are done with TRAINS and TRAINC,
which both update weight and bias values with the specified
learning function. Performance is measured with MAE.
See also SIM, INIT, ADAPT, TRAIN, HARDLIM, HARDLIMS, LEARNP, LEARNPN, TRAINB, TRAINS.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:00
3450 bytes
NEWPNN Design a probabilistic neural network.
Synopsis
net = newpnn(P,T,SPREAD)
Description
Probabilistic neural networks are a kind of radial
basis network suitable for classification problems.
NET = NEWPNN(P,T,SPREAD) takes two or three arguments,
P - RxQ matrix of Q input vectors.
T - SxQ matrix of Q target class vectors.
SPREAD - Spread of radial basis functions, default = 0.1.
and returns a new probabilistic neural network.
If SPREAD is near zero the network will act as a nearest
neighbor classifier. As SPREAD becomes larger the designed
network will take into account several nearby design vectors.
Examples
Here a classification problem is defined with a set of
inputs P and class indices Tc.
P = [1 2 3 4 5 6 7];
Tc = [1 2 3 2 2 3 1];
Here the class indices are converted to target vectors,
and a PNN is designed and tested.
T = ind2vec(Tc)
net = newpnn(P,T);
Y = sim(net,P)
Yc = vec2ind(Y)
Algorithm
NEWPNN creates a two layer network. The first layer has RADBAS
neurons, and calculates its weighted inputs with DIST, and its net
input with NETPROD. The second layer has COMPET neurons, and
calculates its weighted input with DOTPROD and its net inputs with
NETSUM. Only the first layer has biases.
NEWPNN sets the first layer weights to P', and the first
layer biases are all set to 0.8326/SPREAD resulting in
radial basis functions that cross 0.5 at weighted inputs
of +/- SPREAD. The second layer weights W2 are set to T.
References
P.D. Wasserman, Advanced Methods in Neural Computing, New York:
Van Nostrand Reinhold, pp. 35-55, 1993.
See also SIM, IND2VEC, VEC2IND, NEWRB, NEWRBE, NEWGRNN.
ApplicationRoot\WavixIV\neural501
25-Jan-2006 19:49:22
3184 bytes
NEWRB Design a radial basis network.
Synopsis
[net,tr] = newrb(P,T,GOAL,SPREAD,MN,DF)
Description
Radial basis networks can be used to approximate
functions. NEWRB adds neurons to the hidden
layer of a radial basis network until it meets
the specified mean squared error goal.
NEWRB(P,T,GOAL,SPREAD,MN,DF) takes these arguments,
P - RxQ matrix of Q input vectors.
T - SxQ matrix of Q target class vectors.
GOAL - Mean squared error goal, default = 0.0.
SPREAD - Spread of radial basis functions, default = 1.0.
MN - Maximum number of neurons, default is Q.
DF - Number of neurons to add between displays, default = 25.
and returns a new radial basis network.
The larger SPREAD is, the smoother the function approximation
will be. Too large a spread means a lot of neurons will be
required to fit a fast changing function. Too small a spread
means many neurons will be required to fit a smooth function,
and the network may not generalize well. Call NEWRB with
different spreads to find the best value for a given problem.
Examples
Here we design a radial basis network given inputs P
and targets T.
P = [1 2 3];
T = [2.0 4.1 5.9];
net = newrb(P,T);
Here the network is simulated for a new input.
P = 1.5;
Y = sim(net,P)
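As suggested above, several spreads can be compared on the same data; the
values below are illustrative only.
P = [1 2 3];
T = [2.0 4.1 5.9];
for spread = [0.5 1 2]
net = newrb(P,T,0.0,spread);
fprintf('spread %g: mse %g\n', spread, mse(T - sim(net,P)));
end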
Algorithm
NEWRB creates a two layer network. The first layer has RADBAS
neurons, and calculates its weighted inputs with DIST, and
its net input with NETPROD. The second layer has PURELIN neurons,
calculates its weighted input with DOTPROD and its net inputs with
NETSUM. Both layers have biases.
Initially the RADBAS layer has no neurons. The following steps
are repeated until the network's mean squared error falls below GOAL
or the maximum number of neurons are reached:
1) The network is simulated
2) The input vector with the greatest error is found
3) A RADBAS neuron is added with weights equal to that vector.
4) The PURELIN layer weights are redesigned to minimize error.
See also SIM, NEWRBE, NEWGRNN, NEWPNN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:02
6511 bytes
NEWRBE Design an exact radial basis network.
Synopsis
net = newrbe(P,T,SPREAD)
Description
Radial basis networks can be used to approximate functions.
NEWRBE very quickly designs a radial basis network with
zero error on the design vectors.
NEWRBE(P,T,SPREAD) takes two or three arguments,
P - RxQ matrix of Q input vectors.
T - SxQ matrix of Q target class vectors.
SPREAD - Spread of radial basis functions, default = 1.0.
and returns a new exact radial basis network.
The larger SPREAD is, the smoother the function approximation
will be. Too large a spread can cause numerical problems.
Examples
Here we design a radial basis network, given inputs P
and targets T.
P = [1 2 3];
T = [2.0 4.1 5.9];
net = newrbe(P,T);
Here the network is simulated for a new input.
P = 1.5;
Y = sim(net,P)
Algorithm
NEWRBE creates a two layer network. The first layer has RADBAS
neurons, and calculates its weighted inputs with DIST, and its
net input with NETPROD. The second layer has PURELIN neurons,
and calculates its weighted input with DOTPROD and its net inputs
with NETSUM. Both layers have biases.
NEWRBE sets the first layer weights to P', and the first
layer biases are all set to 0.8326/SPREAD, resulting in
radial basis functions that cross 0.5 at weighted inputs
of +/- SPREAD.
The second layer weights IW{2,1} and biases b{2} are found by
simulating the first layer outputs A{1}, and then solving the
following linear expression:
[W{2,1} b{2}] * [A{1}; ones] = T
See also SIM, NEWRB, NEWGRNN, NEWPNN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:08
3374 bytes
NEWSOM Create a self-organizing map.
Syntax
net = newsom(PR,[d1,d2,...],tfcn,dfcn,olr,osteps,tlr,tnd)
Description
Self-organizing maps are used to classify input vectors
according to how they are grouped in the input space.
NET = NEWSOM(PR,[D1,D2,...],TFCN,DFCN,OLR,OSTEPS,TLR,TND) takes,
PR - Rx2 matrix of min and max values for R input elements.
Di - Size of ith layer dimension, defaults = [5 8].
TFCN - Topology function, default = 'hextop'.
DFCN - Distance function, default = 'linkdist'.
OLR - Ordering phase learning rate, default = 0.9.
OSTEPS - Ordering phase steps, default = 1000.
TLR - Tuning phase learning rate, default = 0.02;
TND - Tuning phase neighborhood distance, default = 1.
and returns a new self-organizing map.
The topology function TFCN can be HEXTOP, GRIDTOP, or RANDTOP.
The distance function can be LINKDIST, DIST, or MANDIST.
Examples
The input vectors defined below are distributed over
a 2-dimensional input space varying over [0 2] and [0 1].
This data will be used to train a SOM with dimensions [3 5].
P = [rand(1,400)*2; rand(1,400)];
net = newsom([0 2; 0 1],[3 5]);
plotsom(net.layers{1}.positions)
Here the SOM is trained for 25 epochs and the input vectors are
plotted together with the map that the SOM's weights have formed.
net.trainParam.epochs = 25;
net = train(net,P);
plot(P(1,:),P(2,:),'.g','markersize',20)
hold on
plotsom(net.iw{1,1},net.layers{1}.distances)
hold off
Properties
SOMs consist of a single layer with the NEGDIST weight function,
NETSUM net input function, and the COMPET transfer function.
The layer has a weight from the input, but no bias.
The weight is initialized with MIDPOINT.
Adaption and training are done with TRAINS and TRAINR,
which both update the weight with LEARNSOM.
See also SIM, INIT, ADAPT, TRAIN, TRAINS, TRAINR.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:08
4497 bytes
NEWTR New training record with any number of optional fields.
Syntax
tr = newtr(epochs,'fieldname1','fieldname2',...)
tr = newtr([firstEpoch epochs],'fieldname1','fieldname2',...)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code which calls this function.
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:17:52
746 bytes
==========================================
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:28
740 bytes
Copyright 2005 The MathWorks, Inc.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:28
877 bytes
Copyright 2005 The MathWorks, Inc.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:30
997 bytes
NNCOPY Copy matrix or cell array.
Syntax
nncopy(x,m,n)
Description
NNCOPY(X,M,N) takes two arguments,
X - RxC matrix (or cell array).
M - Number of vertical copies.
N - Number of horizontal copies.
and returns a new (R*M)x(C*N) matrix (or cell array).
Examples
x1 = [1 2 3; 4 5 6];
y1 = nncopy(x1,3,2)
x2 = {[1 2]; [3; 4; 5]}
y2 = nncopy(x2,2,3)
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:18
677 bytes
NNETBHELP Neural Network Blockset on-line help function.
Points Web browser to the HTML help file corresponding to this
Neural Network Blockset block. The current block is queried
for its MaskType.
Typical usage:
set_param(gcb,'MaskHelp','web(nnetbhelp);');
ApplicationRoot\WavixIV\neural501
16-Jun-2006 21:37:02
1333 bytes
NNGUITOOLS A helper function for NNTOOL.
ApplicationRoot\WavixIV\neural501
27-Jun-2005 18:09:36
39744 bytes
Copyright 2005 The MathWorks, Inc.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:32
807 bytes
Copyright 1992-2005 The MathWorks, Inc. $Revision: 1.1.6.2 $ $Date: 2005/12/22 18:22:32 $
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:32
361 bytes
Copyright 2005 The MathWorks, Inc.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:34
185 bytes
NNT2C Update NNT 2.0 competitive layer.
Syntax
net = nnt2c(pr,w,klr,clr)
Description
NNT2C(PR,W,KLR,CLR) takes these arguments,
PR - Rx2 matrix of min and max values for R input elements.
W - SxR weight matrix.
KLR - Kohonen learning rate, default = 0.01.
CLR - Conscience learning rate, default = 0.001.
and returns a competitive layer.
Once a network has been updated it can be simulated, initialized,
adapted, or trained with SIM, INIT, ADAPT, and TRAIN.
See also NEWC.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:10
1080 bytes
NNT2ELM Update NNT 2.0 Elman backpropagation network.
Syntax
net = nnt2elm(pr,w1,b1,w2,b2,btf,blf,pf)
Description
NNT2ELM(PR,W1,B1,W2,B2,BTF,BLF,PF) takes these arguments,
PR - Rx2 matrix of min and max values for R input elements.
W1 - S1x(R+S1) weight matrix.
B1 - S1x1 bias vector.
W2 - S2xS1 weight matrix.
B2 - S2x1 bias vector.
BTF - Backprop network training function, default = 'traingdx'.
BLF - Backprop weight/bias learning function, default = 'learngdm'.
PF - Performance function, default = 'mse'.
and returns an Elman network.
The training function BTF can be any of the backprop training
functions such as TRAINGD, TRAINGDM, TRAINGDA, and TRAINGDX.
Large step-size algorithms such as TRAINLM are not recommended
for Elman networks.
The learning function BLF can be either of the backpropagation
learning functions such as LEARNGD, or LEARNGDM.
The performance function can be any of the differentiable performance
functions such as MSE or MSEREG.
Once a network has been updated it can be simulated,
initialized, adapted, or trained with SIM, INIT, ADAPT, and TRAIN.
See also NEWELM.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:10
2287 bytes
NNT2FF Update NNT 2.0 feed-forward network.
Syntax
net = nnt2ff(pr,{w1 w2 ...},{b1 b2 ...},{tf1 tf2 ...},btf,blr,pf)
Description
NNT2FF(PR,{W1 W2 ...},{B1 B2 ...},{TF1 TF2 ...},BTF,BLR,PF) takes these arguments,
PR - Rx2 matrix of min and max values for R input elements.
Wi - Weight matrix for the ith layer.
Bi - Bias vector for the ith layer.
TFi - Transfer function of ith layer, default = 'tansig'.
BTF - Backprop network training function, default = 'traingdx'.
BLF - Backprop weight/bias learning function, default = 'learngdm'.
PF - Performance function, default = 'mse'.
and returns a feed-forward network.
The training function BTF can be any of the backprop training
functions such as TRAINGD, TRAINGDM, TRAINGDA, TRAINGDX, or TRAINLM.
The learning function BLF can be either of the backpropagation
learning functions such as LEARNGD, or LEARNGDM.
The performance function can be any of the differentiable performance
functions such as MSE or MSEREG.
Once a network has been updated it can be simulated,
initialized, adapted, or trained with SIM, INIT, ADAPT, and TRAIN.
See also NEWFF, NEWCF, NEWFFTD, NEWELM.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:12
2155 bytes
NNT2HOP Update NNT 2.0 Hopfield recurrent network.
Syntax
net = nnt2hop(w,b)
Description
NNT2HOP(W,B) takes these arguments,
W - SxS weight matrix.
B - Sx1 bias vector
and returns a Hopfield recurrent network.
Once a network has been updated it can be simulated,
initialized, adapted, or trained with SIM, INIT, ADAPT, and TRAIN.
See also NEWHOP.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:12
928 bytes
NNT2LIN Update NNT 2.0 linear layer.
Syntax
net = nnt2lin(pr,w,b,lr)
Description
NNT2LIN(PR,W,B,LR) takes these arguments,
PR - Rx2 matrix of min and max values for R input elements.
W - SxR weight matrix.
B - Sx1 bias vector
LR - Learning rate, default = 0.01;
and returns a linear layer.
Once a network has been updated it can be simulated, initialized,
adapted, or trained with SIM, INIT, ADAPT, and TRAIN.
See also NEWLIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:14
1046 bytes
NNT2LVQ Update NNT 2.0 learning vector quantization network.
Syntax
net = nnt2lvq(pr,w1,w2,lr,lf)
Description
NNT2LVQ(PR,W1,W2,LR,LF) takes these arguments,
PR - Rx2 matrix of min and max values for R input elements.
W1 - S1xR weight matrix.
W2 - S2xS1 weight matrix.
LR - learning rate, default = 0.01.
LF - Learning function, default = 'learnlv2'.
and returns an LVQ network.
The learning function LF can be LEARNLV1 or LEARNLV2.
Once a network has been updated it can be simulated, initialized,
adapted, or trained with SIM, INIT, ADAPT, and TRAIN.
See also NEWLVQ.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:14
1244 bytes
NNT2P Update NNT 2.0 perceptron.
Syntax
net = nnt2p(pr,w,b,tf,lf)
Description
NNT2P(PR,W,B,TF,LF) takes these arguments,
PR - Rx2 matrix of min and max values for R input elements.
W - SxR weight matrix.
B - Sx1 bias vector
TF - Transfer function, default = 'hardlim'.
LF - Learning function, default = 'learnp'.
and returns a perceptron.
The transfer function TF can be HARDLIM or HARDLIMS.
The learning function LF can be LEARNP or LEARNPN.
Once a network has been updated it can be simulated, initialized,
adapted, or trained with SIM, INIT, ADAPT, and TRAIN.
See also NEWP.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:16
1268 bytes
NNT2RB Update NNT 2.0 radial basis network.
Syntax
net = nnt2rb(pr,w1,b1,w2,b2)
Description
NNT2RB(PR,W1,B1,W2,B2) takes these arguments,
PR - Rx2 matrix of min and max values for R input elements.
W1 - S1xR weight matrix.
B1 - S1x1 bias vector.
W2 - S2xS1 weight matrix.
B2 - S2x1 bias vector.
and returns a radial basis network.
Once a network has been updated it can be simulated, initialized,
adapted, or trained with SIM, INIT, ADAPT, and TRAIN.
See also NEWRB, NEWRBE, NEWGRNN, NEWPNN.
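Example
No example appears in the original help text. The sketch below is
illustrative only; all weight and bias values are arbitrary placeholders.
pr = [-1 1];                          % one input element
w1 = [-1; 0; 1]; b1 = ones(3,1);      % hypothetical radial basis layer (S1 = 3)
w2 = rand(1,3); b2 = rand(1,1);       % hypothetical linear output layer (S2 = 1)
net = nnt2rb(pr,w1,b1,w2,b2);
y = sim(net,[-0.5 0 0.5])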
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:16
1568 bytes
NNT2SOM Update NNT 2.0 self-organizing map.
Syntax
net = nnt2som(pr,[d1 d2 ...],w,olr,osteps,tlr,tnd)
Description
NNT2SOM(PR,[D1,D2,...],W,OLR,OSTEPS,TLR,TND) takes these arguments,
PR - Rx2 matrix of min and max values for R input elements.
Di - Size of ith layer dimension.
W - SxR weight matrix.
OLR - Ordering phase learning rate, default = 0.9.
OSTEPS - Ordering phase steps, default = 1000.
TLR - Tuning phase learning rate, default = 0.02;
TND - Tuning phase neighborhood distance, default = 1.
and returns a self-organizing map.
NNT2SOM assumes that the self-organizing map has a
grid topology (GRIDTOP) using link distances (LINKDIST).
This corresponds with the neighborhood function in NNT 2.0.
The new network will only output 1 for the neuron with the greatest
net input. In NNT2 the network would also output 0.5 for that neuron's
neighbors.
Once a network has been updated it can be simulated, initialized,
adapted, or trained with SIM, INIT, ADAPT, and TRAIN.
See also NEWSOM.
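Example
No example appears in the original help text. The sketch below is
illustrative only; the weight values and grid dimensions are arbitrary
placeholders, and OLR, OSTEPS, TLR, and TND keep their defaults.
pr = [0 1; 0 1];               % two input elements
w = rand(12,2);                % hypothetical weights for a 3x4 grid (S = 12)
net = nnt2som(pr,[3 4],w);
y = sim(net,rand(2,5))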
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:18
1748 bytes
Copyright 2005 The MathWorks, Inc.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:34
160 bytes
NNTOBSF Warn that a function is obsolete. nntobsf(fcnName,line1,line2,...) *WARNING*: This function is undocumented as it may be altered at any time in the future without warning.
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:18:24
697 bytes
NNTOBSU Warn that a function use is obsolete. nntobsu(fcnName,line1,line2,...) *WARNING*: This function is undocumented as it may be altered at any time in the future without warning.
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:17:18
704 bytes
NNTWARN Turn Neural Network Toolbox warnings on or off.
Syntax
nntwarn on
nntwarn off
Description
NNTWARN allows Neural Network Toolbox warnings to be temporarily
turned off.
Code using obsolete Neural Network Toolbox functionality can
generate a lot of warnings. This function allows you to skip
those warnings. However, we encourage you to update your code
to ensure that it will run under future versions of the toolbox.
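Example
The sketch below (not part of the original help text) shows the intended
usage pattern: warnings are suppressed only around code that calls
obsolete functions.
nntwarn off
% ... calls to obsolete Neural Network Toolbox functions ...
nntwarn on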
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:18:16
827 bytes
Copyright 2005 The MathWorks, Inc.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:36
162 bytes
NORMC Normalize columns of a matrix.
Syntax
normc(M)
Description
NORMC(M) normalizes the columns of M to a length of 1.
Examples
m = [1 2; 3 4]
n = normc(m)
See also NORMR
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:20
552 bytes
NORMPROD Normalized dot product weight function.
Syntax
Z = normprod(W,P,FP)
info = normprod(code)
dim = normprod('size',S,R,FP)
dp = normprod('dp',W,P,Z,FP)
dw = normprod('dw',W,P,Z,FP)
Description
NORMPROD is a weight function. Weight functions apply
weights to an input to get weighted inputs.
NORMPROD(W,P,FP) takes these inputs,
W - SxR weight matrix.
P - RxQ matrix of Q input (column) vectors.
FP - Row cell array of function parameters (optional, ignored).
and returns the SxQ matrix of normalized dot products.
NORMPROD(code) returns information about this function.
These codes are defined:
'deriv' - Name of derivative function.
'pfullderiv' - Full input derivative = 1, linear input derivative = 0.
'wfullderiv' - Full weight derivative = 1, linear weight derivative = 0.
'name' - Full name.
'fpnames' - Returns names of function parameters.
'fpdefaults' - Returns default function parameters.
NORMPROD('size',S,R,FP) takes the layer dimension S, input dimension R,
and function parameters, and returns the weight size [SxR].
NORMPROD('dp',W,P,Z,FP) returns the derivative of Z with respect to P.
NORMPROD('dw',W,P,Z,FP) returns the derivative of Z with respect to W.
Examples
Here we define a random weight matrix W and input vector P
and calculate the corresponding weighted input Z.
W = rand(4,3);
P = rand(3,1);
Z = normprod(W,P)
Network Use
You can create a standard network that uses NORMPROD
by calling NEWGRNN.
To change a network so an input weight uses NORMPROD, set
NET.inputWeights{i,j}.weightFcn to 'normprod'. For a layer weight,
set NET.layerWeights{i,j}.weightFcn to 'normprod'.
In either case, call SIM to simulate the network with NORMPROD.
See NEWGRNN for simulation examples.
Algorithm
NORMPROD returns the dot product normalized by the sum
of the input vector elements.
z = w*p/sum(p)
See also DOTPROD.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:24
4262 bytes
NORMR Normalize rows of a matrix.
Syntax
normr(M)
Description
NORMR(M) normalizes the rows of M to a length of 1.
Examples
m = [1 2; 3 4]
n = normr(m)
See also NORMC.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:20
550 bytes
NULLPF Null performance function.
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code which calls this function.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:36
1245 bytes
PAUSE2 Pause procedure for specified time.
PAUSE2(N)
N - number of seconds (may be fractional).
Stops procedure for N seconds.
PAUSE2 differs from PAUSE in that pauses may take a fractional
number of seconds. PAUSE(1.2) will halt a procedure for 1 second.
PAUSE2(1.2) will halt a procedure for 1.2 seconds.
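Example
The sketch below (not part of the original help text) is illustrative only;
the pause length is an arbitrary placeholder.
tic
pause2(0.25)                   % halt execution for a quarter of a second
toc                            % elapsed time is approximately 0.25 seconds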
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:17:22
613 bytes
PLOTBR Plot network performance for Bayesian regularization training.
Syntax
plotbr(tr,name,epoch)
Description
PLOTBR(TR,NAME,EPOCH) takes these inputs,
TR - Training record returned by train.
NAME - Training function name, default = ''.
EPOCH - Number of epochs, default = length of training record.
and plots the training sum squared error, the sum squared weights
and the effective number of parameters.
Example
Here are input values P and associated targets T.
p = [-1:.05:1];
t = sin(2*pi*p)+0.1*randn(size(p));
The code below creates a network and trains it on this problem.
net=newff([-1 1],[20,1],{'tansig','purelin'},'trainbr');
[net,tr] = train(net,p,t);
During training PLOTBR was called to display the training
record. You can also call PLOTBR directly with the final
training record TR, as shown below.
plotbr(tr)
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:26
7066 bytes
PLOTEP Plot a weight-bias position on an error surface.
Syntax
h = plotep(w,b,e)
h = plotep(w,b,e,h)
Description
PLOTEP is used to show network learning on a plot
already created by PLOTES.
PLOTEP(W,B,E) takes these arguments
W - Current weight value.
B - Current bias value.
E - Current error.
and returns a vector H containing information for
continuing the plot.
PLOTEP(W,B,E,H) continues plotting using the vector H,
returned by the last call to PLOTEP.
H contains handles to dots plotted on the error surface,
so they can be deleted next time, as well as points on
the error contour, so they can be connected.
See also ERRSURF, PLOTES.
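Example
No example appears in the original help text. The sketch below is
illustrative only; the weight and bias values are arbitrary placeholders,
and the error is computed as a sum squared error to match the surface
produced by ERRSURF.
p = [3 2]; t = [0.4 0.8];
wv = -4:0.4:4; bv = wv;
es = errsurf(p,t,wv,bv,'logsig');
plotes(wv,bv,es,[60 30])
w = 0; b = 0;                  % hypothetical current weight and bias
e = sumsqr(t - logsig(w*p + b));
h = plotep(w,b,e);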
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:28
1674 bytes
PLOTES Plot the error surface of a single input neuron.
Syntax
plotes(wv,bv,es,v)
Description
PLOTES(WV,BV,ES,V) takes these arguments,
WV - 1xN row vector of values of W.
BV - 1xM row vector of values of B.
ES - MxN matrix of error vectors.
V - View, default = [-37.5, 30].
and plots the error surface with a contour underneath.
Calculate the error surface ES with ERRSURF.
Examples
p = [3 2];
t = [0.4 0.8];
wv = -4:0.4:4; bv = wv;
ES = errsurf(p,t,wv,bv,'logsig');
plotes(wv,bv,ES,[60 30])
See also ERRSURF.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:28
2434 bytes
PLOTPC Plot a classification line on a perceptron vector plot.
Syntax
plotpc(W,b)
plotpc(W,b,h)
Description
PLOTPC(W,B) takes these inputs,
W - SxR weight matrix (R must be 3 or less).
B - Sx1 bias vector.
and returns a handle to a plotted classification line.
PLOTPC(W,B,H) takes these inputs,
H - Handle to last plotted line.
and deletes the last line before plotting the new one.
This function does not change the current axis and is intended
to be called after PLOTPV.
Example
The code below defines and plots the inputs and targets for a
perceptron:
p = [0 0 1 1; 0 1 0 1];
t = [0 0 0 1];
plotpv(p,t)
The following code creates a perceptron with inputs ranging
over the values in P, assigns values to its weights
and biases, and plots the resulting classification line.
net = newp(minmax(p),1);
net.iw{1,1} = [-1.2 -0.5];
net.b{1} = 1;
plotpc(net.iw{1,1},net.b{1})
See also PLOTPV.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:30
2698 bytes
PLOTPV Plot perceptron input/target vectors.
Syntax
plotpv(p,t)
plotpv(p,t,v)
Description
PLOTPV(P,T) take these inputs,
P - RxQ matrix of input vectors (R must be 3 or less).
T - SxQ matrix of binary target vectors (S must be 3 or less).
and plots column vectors in P with markers based on T.
PLOTPV(P,T,V) takes an additional input,
V - Graph limits = [x_min x_max y_min y_max]
and plots the column vectors with limits set by V.
Example
The code below defines and plots the inputs and targets
for a perceptron:
p = [0 0 1 1; 0 1 0 1];
t = [0 0 0 1];
plotpv(p,t)
The following code creates a perceptron with inputs ranging
over the values in P, assigns values to its weights
and biases, and plots the resulting classification line.
net = newp(minmax(p),1);
net.iw{1,1} = [-1.2 -0.5];
net.b{1} = 1;
plotpc(net.iw{1,1},net.b{1})
See also PLOTPC.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:32
2430 bytes
PLOTSOM Plot self-organizing map.
Syntax
plotsom(pos)
plotsom(W,d,nd)
Description
PLOTSOM(POS) takes one argument,
POS - NxS matrix of S N-dimension neural positions.
and plots the neuron positions with red dots, linking
the neurons within a Euclidean distance of 1.
PLOTSOM(W,D,ND) takes three arguments,
W - SxR weight matrix.
D - SxS distance matrix.
ND - Neighborhood distance, default = 1.
and plots the neuron's weight vectors with connections
between weight vectors whose neurons are within a
distance of 1.
Examples
Here are some neat plots of various layer topologies:
pos = hextop(5,6); plotsom(pos)
pos = gridtop(4,5); plotsom(pos)
pos = randtop(18,12); plotsom(pos)
pos = gridtop(4,5,2); plotsom(pos)
pos = hextop(4,4,3); plotsom(pos)
See NEWSOM for an example of plotting a layer's
weight vectors with the input vectors they map.
See also NEWSOM, LEARNSOM, INITSOM.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:32
2301 bytes
PLOTV Plot vectors as lines from the origin.
Syntax
plotv(m,t)
Description
PLOTV(M,T) takes two inputs,
M - RxQ matrix of Q column vectors with R elements.
T - (optional) the line plotting type, default = '-'.
and plots the column vectors of M.
R must be 2 or greater. If R is greater than two,
only the first two rows of M are used for the plot.
Examples
plotv([-.4 0.7 .2; -0.5 .1 0.5],'-')
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:34
863 bytes
PLOTVEC Plot vectors with different colors.
Syntax
plotvec(x,c,m)
Description
PLOTVEC(X,C,M) takes these inputs,
X - Matrix of (column) vectors.
C - Row vector of color coordinate.
M - Marker, default = '+'.
and plots each ith vector in X with a marker M, using the
ith value in C as the color coordinate.
PLOTVEC(X) only takes a matrix X and plots each ith
vector in X with marker '+' using the index i as the
color coordinate.
Examples
x = [0 1 0.5 0.7; -1 2 0.5 0.1];
c = [1 2 3 4];
plotvec(x,c)
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:34
1618 bytes
PNORMC Pseudo-normalize columns of a matrix.
Syntax
pnormc(x,r)
Description
PNORMC(X,R) takes these arguments,
X - MxN matrix.
R - (optional) radius to normalize columns to, default = 1.
returns X with an additional row of elements which results
in new column vector lengths of R.
WARNING: For this function to work properly, the columns of X must
originally have vector lengths less than R.
Examples
x = [0.1 0.6; 0.3 0.1];
y = pnormc(x)
See also NORMC, NORMR.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:22
923 bytes
POSLIN Positive linear transfer function.
Syntax
A = poslin(N,FP)
dA_dN = poslin('dn',N,A,FP)
INFO = poslin(CODE)
Description
POSLIN is a neural transfer function. Transfer functions
calculate a layer's output from its net input.
POSLIN(N,FP) takes N and optional function parameters,
N - SxQ matrix of net input (column) vectors.
FP - Struct of function parameters (ignored).
and returns A, the SxQ matrix of N's elements clipped to [0, inf].
POSLIN('dn',N,A,FP) returns SxQ derivative of A w-respect to N.
If A or FP are not supplied or are set to [], FP reverts to
the default parameters, and A is calculated from N.
POSLIN('name') returns the name of this function.
POSLIN('output',FP) returns the [min max] output range.
POSLIN('active',FP) returns the [min max] active input range.
POSLIN('fullderiv') returns 1 or 0, whether DA_DN is SxSxQ or SxQ.
POSLIN('fpnames') returns the names of the function parameters.
POSLIN('fpdefaults') returns the default function parameters.
Examples
Here is the code to create a plot of the POSLIN transfer function.
n = -5:0.1:5;
a = poslin(n);
plot(n,a)
Here we assign this transfer function to layer i of a network.
net.layers{i}.transferFcn = 'poslin';
Algorithm
poslin(n) = n, if n >= 0
= 0, if n <= 0
See also SIM, PURELIN, SATLIN, SATLINS.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:12
2599 bytes
POSTREG Postprocesses the trained network response with a linear regression.
Syntax
[m,b,r] = postreg(A,T)
[m,b,r] = postreg(A,T,X)
Description
POSTREG postprocesses the network training
set by performing a linear regression between one element
of the network response and the corresponding target.
POSTREG(A,T) takes these inputs,
A - 1xQ array of network outputs. One element of the network output.
T - 1xQ array of targets. One element of the target vector.
and returns and plots,
M - Slope of the linear regression.
B - Y intercept of the linear regression.
R - Regression R-value. R=1 means perfect correlation.
POSTREG({Atrain,Avalidation,Atest},{Ttrain,Tvalidation,Ttest})
returns and plots,
M = {Mtrain,Mvalidation,Mtest}
B = {Btrain,Bvalidation,Btest}
R = {Rtrain,Rvalidation,Rtest}
Training values are required. Validation and test values are optional.
POSTREG(A,T,X)
X - any value
returns M, B, and R without creating a plot.
Example
In this example we normalize a set of training data with
MAPSTD, perform a principal component transformation on
the normalized data, create and train a network using the pca
data, simulate the network, unnormalize the output of the
network using MAPSTD in reverse mode, and perform a linear regression between
the network outputs (unnormalized) and the targets to check the
quality of the network training.
p = [-0.92 0.73 -0.47 0.74 0.29; -0.08 0.86 -0.67 -0.52 0.93];
t = [-0.08 3.4 -0.82 0.69 3.1];
[pn,pp1] = mapstd(p);
[tn,tp] = mapstd(t);
[ptrans,pp2] = processpca(pn,0.02);
net = newff(minmax(ptrans),[5 1],{'tansig' 'purelin'},'trainlm');
net = train(net,ptrans,tn);
an = sim(net,ptrans);
a = mapstd('reverse',an,tp);
[m,b,r] = postreg(a,t);
Algorithm
Performs a linear regression between the network response
and the target, and computes the correlation coefficient
(R value) between the network response and the target.
See also PREMNMX, PREPCA.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:36
4944 bytes
PROCESSPCA Processes rows of matrix with principal component analysis.
Syntax
[y,ps] = processpca(x,maxfrac)
[y,ps] = processpca(x,fp)
y = processpca('apply',x,ps)
x = processpca('reverse',y,ps)
dx_dy = processpca('dx',x,y,ps)
dx_dy = processpca('dx',x,[],ps)
name = processpca('name');
fp = processpca('pdefaults');
names = processpca('pnames');
processpca('pcheck',fp);
Description
PROCESSPCA processes matrices using principal component analysis so
that each row is uncorrelated, the rows are in the order of the amount
they contribute to total variation, and rows whose contribution
to total variation are less than MAXFRAC are removed.
PROCESSPCA(X,MAXFRAC) takes X and an optional parameter,
X - NxQ matrix or a 1xTS row cell array of NxQ matrices.
MAXFRAC - Maximum fraction of variance for removed rows. (Default 0)
and returns,
Y - Each MxQ matrix with N-M rows deleted (optional).
PS - Process settings, to allow consistent processing of values.
PROCESSPCA(X,FP) takes parameters as struct: FP.maxfrac.
PROCESSPCA('apply',X,PS) returns Y, given X and settings PS.
PROCESSPCA('reverse',Y,PS) returns X, given Y and settings PS.
PROCESSPCA('dx',X,Y,PS) returns MxNxQ derivative of Y w/respect to X.
PROCESSPCA('dx',X,[],PS) returns the derivative, less efficiently.
PROCESSPCA('name') returns the name of this process method.
PROCESSPCA('pdefaults') returns default process parameter structure.
PROCESSPCA('pdesc') returns the process parameter descriptions.
PROCESSPCA('pcheck',fp) throws an error if any parameter is illegal.
Here is how to format a matrix with an independent row, a correlated row,
and a completely redundant row, so that its rows are uncorrelated and
the redundant row is dropped.
x1_independent = rand(1,5);
x1_correlated = rand(1,5) + x1_independent;
x1_redundant = x1_independent + x1_correlated;
x1 = [x1_independent; x1_correlated; x1_redundant]
[y1,ps] = processpca(x1,0.01)
Next, we apply the same processing settings to new values.
x2_independent = rand(1,5);
x2_correlated = rand(1,5) + x2_independent;
x2_redundant = x2_independent + x2_correlated;
x2 = [x2_independent; x2_correlated; x2_redundant];
y2 = processpca('apply',x2,ps)
Here we reverse the processing of y1 to get x1 again.
x1_again = processpca('reverse',y1,ps)
Algorithm
X is transformed with the principal component (Karhunen-Loeve)
transformation, so that the rows of Y are uncorrelated and ordered by
decreasing contribution to the total variance. Rows whose contribution
to the total variance is less than MAXFRAC are removed.
See also MAPMINMAX, FIXUNKNOWNS, MAPSTD, REMOVECONSTANTROWS
ApplicationRoot\WavixIV\neural501
27-May-2006 13:14:30
5150 bytes
PURELIN Linear transfer function.
Syntax
A = purelin(N,FP)
dA_dN = purelin('dn',N,A,FP)
INFO = purelin(CODE)
Description
PURELIN is a neural transfer function. Transfer functions
calculate a layer's output from its net input.
PURELIN(N,FP) takes N and optional function parameters,
N - SxQ matrix of net input (column) vectors.
FP - Struct of function parameters (ignored).
and returns A, an SxQ matrix equal to N.
PURELIN('dn',N,A,FP) returns SxQ derivative of A w-respect to N.
If A or FP are not supplied or are set to [], FP reverts to
the default parameters, and A is calculated from N.
PURELIN('name') returns the name of this function.
PURELIN('output',FP) returns the [min max] output range.
PURELIN('active',FP) returns the [min max] active input range.
PURELIN('fullderiv') returns 1 or 0, whether DA_DN is SxSxQ or SxQ.
PURELIN('fpnames') returns the names of the function parameters.
PURELIN('fpdefaults') returns the default function parameters.
Examples
Here is the code to create a plot of the PURELIN transfer function.
n = -5:0.1:5;
a = purelin(n);
plot(n,a)
Here we assign this transfer function to layer i of a network.
net.layers{i}.transferFcn = 'purelin';
Algorithm
a = purelin(n) = n
See also SIM, DPURELIN, SATLIN, SATLINS.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:12
2615 bytes
QUANT Discretize values as multiples of a quantity.
Syntax
quant(x,q)
Description
QUANT(X,Q) takes these inputs,
X - Matrix, vector or scalar.
Q - Minimum value.
and returns values in X rounded to nearest multiple of Q
Examples
x = [1.333 4.756 -3.897];
y = quant(x,0.1)
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:22
520 bytes
RADBAS Radial basis transfer function.
Syntax
A = radbas(N,FP)
dA_dN = radbas('dn',N,A,FP)
INFO = radbas(CODE)
Description
RADBAS is a neural transfer function. Transfer functions
calculate a layer's output from its net input.
RADBAS(N,FP) takes N and optional function parameters,
N - SxQ matrix of net input (column) vectors.
FP - Struct of function parameters (ignored).
and returns A, an SxQ matrix of the radial basis function
applied to each element of N.
RADBAS('dn',N,A,FP) returns SxQ derivative of A w-respect to N.
If A or FP are not supplied or are set to [], FP reverts to
the default parameters, and A is calculated from N.
RADBAS('name') returns the name of this function.
RADBAS('output',FP) returns the [min max] output range.
RADBAS('active',FP) returns the [min max] active input range.
RADBAS('fullderiv') returns 1 or 0, whether DA_DN is SxSxQ or SxQ.
RADBAS('fpnames') returns the names of the function parameters.
RADBAS('fpdefaults') returns the default function parameters.
Examples
Here we create a plot of the RADBAS transfer function.
n = -5:0.1:5;
a = radbas(n);
plot(n,a)
Here we assign this transfer function to layer i of a network.
net.layers{i}.transferFcn = 'radbas';
Algorithm
a = radbas(n) = exp(-n^2)
See also SIM, TRIBAS, DRADBAS.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:14
2623 bytes
RANDNC Normalized column weight initialization function.
Syntax
W = randnc(S,PR)
W = randnc(S,R)
Description
RANDNC is a weight initialization function.
RANDNC(S,PR) takes these inputs,
S - Number of rows (neurons).
PR - Rx2 matrix of input value ranges = [Pmin Pmax].
and returns an SxR random matrix with normalized columns.
Can also be called as RANDNC(S,R).
See also RANDNR.
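Examples
The sketch below (not part of the original help text) is illustrative only;
the input ranges are arbitrary placeholders.
W = randnc(4,[0 1; -2 2; 0 5]) % 4 neurons, 3 input elements
sqrt(sum(W.^2,1))              % every column has length 1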
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:32
771 bytes
RANDNR Normalized row weight initialization function.
Syntax
W = randnr(S,PR)
W = randnr(S,R)
Description
RANDNR is a weight initialization function.
RANDNR(S,PR) takes these inputs,
S - Number of rows (neurons).
PR - Rx2 matrix of input value ranges = [Pmin Pmax].
and returns an SxR random matrix with normalized rows.
Can also be called as RANDNR(S,R).
See also RANDNC.
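Examples
The sketch below (not part of the original help text) is illustrative only;
the input ranges are arbitrary placeholders.
W = randnr(4,[0 1; -2 2; 0 5]) % 4 neurons, 3 input elements
sqrt(sum(W.^2,2))              % every row has length 1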
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:32
762 bytes
RANDS Symmetric random weight/bias initialization function.
Syntax
W = rands(S,PR)
M = rands(S,R)
v = rands(S);
Description
RANDS is a weight/bias initialization function.
RANDS(S,PR) takes,
S - number of neurons.
PR - Rx2 matrix of R input ranges.
and returns an S-by-R weight matrix of random values between -1 and 1.
RANDS(S,R) returns an S-by-R matrix of random values.
RANDS(S) returns an S-by-1 vector of random values.
Examples
Here three sets of random values are generated with RANDS.
rands(4,[0 1; -2 2])
rands(4)
rands(2,3)
Network Use
To prepare the weights and the bias of layer i of a custom network
to be initialized with RANDS:
1) Set NET.initFcn to 'initlay'.
(NET.initParam will automatically become INITLAY's default parameters.)
2) Set NET.layers{i}.initFcn to 'initwb'.
3) Set each NET.inputWeights{i,j}.initFcn to 'rands'.
Set each NET.layerWeights{i,j}.initFcn to 'rands';
Set each NET.biases{i}.initFcn to 'rands'.
To initialize the network call INIT.
See also RANDNR, RANDNC, INITWB, INITLAY, INIT
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:34
1685 bytes
RANDTOP Random layer topology function.
Syntax
pos = randtop(dim1,dim2,...,dimN)
Description
RANDTOP calculates the neuron positions for layers whose
neurons are arranged in an N dimensional random pattern.
RANDTOP(DIM1,DIM2,...,DIMN) takes N arguments,
DIMi - Length of layer in dimension i.
and returns an NxS matrix of N coordinate vectors
where S is the product of DIM1*DIM2*...*DIMN.
Examples
This code creates and displays a two-dimensional layer
with 192 neurons arranged in a 16x12 random pattern.
pos = randtop(16,12); plotsom(pos)
This code plots the connections between the same neurons,
but shows each neuron at the location of its weight vector.
The weights are generated randomly so that the layer is
very unorganized, as is evident in the plot.
W = rands(192,2); plotsom(W,dist(pos))
See also GRIDTOP, HEXTOP.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:50
1417 bytes
REMOVECONSTANTROWS Remove matrix rows with constant values.
Syntax
[y,ps] = removeconstantrows(x,max_range)
[y,ps] = removeconstantrows(x,fp)
y = removeconstantrows('apply',x,ps)
x = removeconstantrows('reverse',y,ps)
dx_dy = removeconstantrows('dx',x,y,ps)
dx_dy = removeconstantrows('dx',x,[],ps)
name = removeconstantrows('name');
fp = removeconstantrows('pdefaults');
names = removeconstantrows('pnames');
removeconstantrows('pcheck',fp);
Description
REMOVECONSTANTROWS processes matrices by removing rows with constant values.
REMOVECONSTANTROWS(X,MAX_RANGE) takes X and an optional parameter,
X - Single NxQ matrix or a 1xTS row cell array of NxQ matrices.
max_range - max range of values for row to be removed. (Default is 0)
and returns,
Y - Each MxQ matrix with N-M rows deleted (optional).
PS - Process settings, to allow consistent processing of values.
REMOVECONSTANTROWS(X,FP) takes parameters as struct: FP.max_range.
REMOVECONSTANTROWS('apply',X,PS) returns Y, given X and settings PS.
REMOVECONSTANTROWS('reverse',Y,PS) returns X, given Y and settings PS.
REMOVECONSTANTROWS('dx',X,Y,PS) returns MxNxQ derivative of Y w/respect to X.
REMOVECONSTANTROWS('dx',X,[],PS) returns the derivative, less efficiently.
REMOVECONSTANTROWS('name') returns the name of this process method.
REMOVECONSTANTROWS('pdefaults') returns default process parameter structure.
REMOVECONSTANTROWS('pdesc') returns the process parameter descriptions.
REMOVECONSTANTROWS('pcheck',fp) throws an error if any parameter is illegal.
Examples
Here is how to format a matrix so that the rows with
constant values are removed.
x1 = [1 2 4; 1 1 1; 3 2 2; 0 0 0]
[y1,ps] = removeconstantrows(x1)
Next, we apply the same processing settings to new values.
x2 = [5 2 3; 1 1 1; 6 7 3; 0 0 0]
y2 = removeconstantrows('apply',x2,ps)
Here we reverse the processing of y1 to get x1 again.
x1_again = removeconstantrows('reverse',y1,ps)
See also MAPMINMAX, FIXUNKNOWNS, MAPSTD, PROCESSPCA
ApplicationRoot\WavixIV\neural501
16-Jun-2006 21:37:02
4247 bytes
REMOVEROWS Remove matrix rows with specified indices.
Syntax
[y,ps] = removerows(x,ind)
[y,ps] = removerows(x,fp)
y = removerows('apply',x,ps)
x = removerows('reverse',y,ps)
dx_dy = removerows('dx',x,y,ps)
dx_dy = removerows('dx',x,[],ps)
name = removerows('name');
fp = removerows('pdefaults');
names = removerows('pnames');
removerows('pcheck',fp);
Description
REMOVEROWS processes matrices by removing rows with the specified indices.
REMOVEROWS(X,IND) takes X and an optional parameter,
X - NxQ matrix or a 1xTS row cell array of NxQ matrices.
IND - Vector of row indices to remove. (Default is [])
and returns,
Y - Each MxQ matrix, where M==N-length(IND). (optional).
PS - Process settings, to allow consistent processing of values.
REMOVEROWS(X,FP) takes parameters as struct: FP.ind.
REMOVEROWS('apply',X,PS) returns Y, given X and settings PS.
REMOVEROWS('reverse',Y,PS) returns X, given Y and settings PS.
REMOVEROWS('dx',X,Y,PS) returns MxNxQ derivative of Y w/respect to X.
REMOVEROWS('dx',X,[],PS) returns the derivative, less efficiently.
REMOVEROWS('name') returns the name of this process method.
REMOVEROWS('pdefaults') returns default process parameter structure.
REMOVEROWS('pdesc') returns the process parameter descriptions.
REMOVEROWS('pcheck',fp) throws an error if any parameter is illegal.
Examples
Here is how to format a matrix so that rows 2 and 4 are removed:
x1 = [1 2 4; 1 1 1; 3 2 2; 0 0 0]
[y1,ps] = removerows(x1,[2 4])
Next, we apply the same processing settings to new values.
x2 = [5 2 3; 1 1 1; 6 7 3; 0 0 0]
y2 = removerows('apply',x2,ps)
Here we reverse the processing of y1 to get x1 again.
x1_again = removerows('reverse',y1,ps)
Algorithm
In the reverse calculation, the unknown values of replaced
rows are represented with NaN values.
See also MAPMINMAX, FIXUNKNOWNS, MAPSTD, PROCESSPCA, REMOVECONSTANTROWS
ApplicationRoot\WavixIV\neural501
16-Jun-2006 21:37:02
4123 bytes
SATLIN Saturating linear transfer function.
Syntax
A = satlin(N,FP)
dA_dN = satlin('dn',N,A,FP)
INFO = satlin(CODE)
Description
SATLIN is a neural transfer function. Transfer functions
calculate a layer's output from its net input.
SATLIN(N,FP) takes N and optional function parameters,
N - SxQ matrix of net input (column) vectors.
FP - Struct of function parameters (ignored).
and returns A, the SxQ matrix of N's elements clipped to [0, 1].
SATLIN('dn',N,A,FP) returns SxQ derivative of A w-respect to N.
If A or FP are not supplied or are set to [], FP reverts to
the default parameters, and A is calculated from N.
SATLIN('name') returns the name of this function.
SATLIN('output',FP) returns the [min max] output range.
SATLIN('active',FP) returns the [min max] active input range.
SATLIN('fullderiv') returns 1 or 0, whether DA_DN is SxSxQ or SxQ.
SATLIN('fpnames') returns the names of the function parameters.
SATLIN('fpdefaults') returns the default function parameters.
Examples
Here is the code to create a plot of the SATLIN transfer function.
n = -5:0.1:5;
a = satlin(n);
plot(n,a)
Here we assign this transfer function to layer i of a network.
net.layers{i}.transferFcn = 'satlin';
Algorithm
a = satlin(n) = 0, if n <= 0
n, if 0 <= n <= 1
1, if 1 <= n
See also SIM, POSLIN, SATLINS, PURELIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:14
2746 bytes
SATLINS Symmetric saturating linear transfer function.
Syntax
A = satlins(N,FP)
dA_dN = satlins('dn',N,A,FP)
INFO = satlins(CODE)
Description
SATLINS is a neural transfer function. Transfer functions
calculate a layer's output from its net input.
SATLINS(N,FP) takes N and optional function parameters,
N - SxQ matrix of net input (column) vectors.
FP - Struct of function parameters (ignored).
and returns A, the SxQ matrix of N's elements clipped to [-1, 1].
SATLINS('dn',N,A,FP) returns SxQ derivative of A w-respect to N.
If A or FP are not supplied or are set to [], FP reverts to
the default parameters, and A is calculated from N.
SATLINS('name') returns the name of this function.
SATLINS('output',FP) returns the [min max] output range.
SATLINS('active',FP) returns the [min max] active input range.
SATLINS('fullderiv') returns 1 or 0, whether DA_DN is SxSxQ or SxQ.
SATLINS('fpnames') returns the names of the function parameters.
SATLINS('fpdefaults') returns the default function parameters.
Examples
Here is the code to create a plot of the SATLINS transfer function.
n = -5:0.1:5;
a = satlins(n);
plot(n,a)
Here we assign this transfer function to layer i of a network.
net.layers{i}.transferFcn = 'satlins';
Algorithm
a = satlins(n) = -1, if n <= -1
n, if -1 <= n <= 1
1, if 1 <= n
See also SIM, SATLIN, POSLIN, PURELIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:16
3134 bytes
SCALPROD Scalar product weight function.
Syntax
Z = scalprod(W,P,FP)
dim = scalprod('size',S,R,FP)
dp = scalprod('dp',W,P,Z,FP)
dw = scalprod('dw',W,P,Z,FP)
info = scalprod(code)
Description
SCALPROD is the scalar product weight function. Weight functions
apply weights to an input to get weighted inputs.
SCALPROD(W,P) takes these inputs,
W - 1x1 weight matrix.
P - RxQ matrix of Q input (column) vectors.
and returns the RxQ scalar product of W and P defined by:
Z = w*P
SCALPROD(code) returns information about this function.
These codes are defined:
'deriv' - Name of derivative function.
'fullderiv' - Reduced derivative = 2, Full derivative = 1, linear derivative = 0.
'pfullderiv' - Input: Reduced derivative = 2, Full derivative = 1, linear derivative = 0.
'wfullderiv' - Weight: Reduced derivative = 2, Full derivative = 1, linear derivative = 0.
'name' - Full name.
'fpnames' - Returns names of function parameters.
'fpdefaults' - Returns default function parameters.
SCALPROD('size',S,R,FP) takes the layer dimension S, input dimention R,
and function parameters, and returns the weight size [1x1].
SCALPROD('dp',W,P,Z,FP) returns the derivative of Z with respect to P.
SCALPROD('dw',W,P,Z,FP) returns the derivative of Z with respect to W.
Examples
Here we define a random weight matrix W and input vector P
and calculate the corresponding weighted input Z.
W = rand(1,1);
P = rand(3,1);
Z = scalprod(W,P)
Network Use
To change a network so an input weight uses SCALPROD, set
NET.inputWeights{i,j}.weightFcn to 'scalprod'. For a layer weight,
set NET.layerWeights{i,j}.weightFcn to 'scalprod'.
In either case, call SIM to simulate the network with SCALPROD.
See NEWP and NEWLIN for simulation examples.
See also DOTPROD, SIM, DIST, NEGDIST, NORMPROD.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:26
3440 bytes
SEQ2CON Convert sequential vectors to concurrent vectors.
Syntax
b = seq2con(s)
Description
The neural network toolbox represents batches of vectors
with a matrix, and sequences of vectors with multiple
columns of a cell array.
SEQ2CON and CON2SEQ allow concurrent vectors to be converted
to sequential vectors, and back again.
SEQ2CON(S) takes one input,
S - NxTS cell array of matrices with M columns.
and returns,
B - Nx1 cell array of matrices with M*TS columns.
Example
Here three sequential values are converted to concurrent values.
p1 = {1 4 2}
p2 = seq2con(p1)
Here two sequences of vectors over three time steps
are converted to concurrent vectors.
p1 = {[1; 1] [5; 4] [1; 2]; [3; 9] [4; 1] [9; 8]}
p2 = seq2con(p1)
See also CON2SEQ, CONCUR.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:24
1324 bytes
SETX Set all network weight and bias values with a single vector.
Syntax
net = setx(net,X)
Description
This function sets a networks weight and biases to
a vector of values.
NET = SETX(NET,X)
NET - Neural network.
X - Vector of weight and bias values.
Examples
Here we create a network with a 2-element input, and one
layer of 3 neurons.
net = newff([0 1; -1 1],[3]);
The network has six weights (3 neurons * 2 input elements)
and three biases (3 neurons) for a total of 9 weight and bias
values. We can set them to random values as follows:
net = setx(net,rand(9,1));
We can then view the weight and bias values as follows:
net.iw{1,1}
net.b{1}
See also GETX, FORMX.
ApplicationRoot\WavixIV\neural501
14-Apr-2002 16:18:18
1508 bytes
SLBLOCKS Defines the block library for a specific Toolbox or Blockset.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:26
723 bytes
SOFTMAX Soft max transfer function.
Syntax
A = softmax(N,FP)
dA_dN = softmax('dn',N,A,FP)
INFO = softmax(CODE)
Description
SOFTMAX is a neural transfer function. Transfer functions
calculate a layer's output from its net input.
SOFTMAX(N,FP) takes N and optional function parameters,
N - SxQ matrix of net input (column) vectors.
FP - Struct of function parameters (ignored).
and returns A, the SxQ matrix of the softmax competitive function
applied to each column of N.
SOFTMAX('dn',N,A,FP) returns SxSxQ derivative of A w-respect to N.
If A or FP are not supplied or are set to [], FP reverts to
the default parameters, and A is calculated from N.
SOFTMAX('name') returns the name of this function.
SOFTMAX('output',FP) returns the [min max] output range.
SOFTMAX('active',FP) returns the [min max] active input range.
SOFTMAX('fullderiv') returns 1 or 0, whether DA_DN is SxSxQ or SxQ.
SOFTMAX('fpnames') returns the names of the function parameters.
SOFTMAX('fpdefaults') returns the default function parameters.
Examples
Here we define a net input vector N, calculate the output,
and plot both with bar graphs.
n = [0; 1; -0.5; 0.5];
a = softmax(n);
subplot(2,1,1), bar(n), ylabel('n')
subplot(2,1,2), bar(a), ylabel('a')
Here we assign this transfer function to layer i of a network.
net.layers{i}.transferFcn = 'softmax';
Algorithm
a = softmax(n) = exp(n)/sum(exp(n))
See also SIM, COMPET.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:16
3132 bytes
SP2NARX Convert a series-parallel NARX network to parallel (feedback) form.
Syntax
net = sp2narx(NET)
Description
SP2NARX(NET) takes,
NET - Original NARX network in series-parallel form
and returns an NARX network in parallel (feedback) form.
Examples
Here a series-parallel narx network is created. The network's input ranges
from [-1 to 1]. The first layer has five TANSIG neurons, the
second layer has one PURELIN neuron. The TRAINLM network
training function is to be used.
net = newnarxsp({[-1 1] [-1 1]},[1 2],[1 2],[5 1],{'tansig' 'purelin'});
Here the network is converted from series parallel to parallel narx.
net2 = sp2narx(net);
See also NEWNARXSP, NEWNARX
ApplicationRoot\WavixIV\neural501
25-Jan-2006 19:49:22
1217 bytes
SRCHBAC One-dimensional minimization using backtracking.
Syntax
[a,gX,perf,retcode,delta,tol] = srchbac(net,X,Pd,Tl,Ai,Q,TS,dX,gX,perf,dperf,delta,tol,ch_perf)
Description
SRCHBAC is a linear search routine. It searches in a given direction
to locate the minimum of the performance function in that direction.
It uses a technique called backtracking.
SRCHBAC(NET,X,Pd,Tl,Ai,Q,TS,dX,gX,PERF,DPERF,DELTA,TOL,CH_PERF) takes these inputs,
NET - Neural network.
X - Vector containing current values of weights and biases.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
dX - Search direction vector.
gX - Gradient vector.
PERF - Performance value at current X.
DPERF - Slope of performance value at current X in direction of dX.
DELTA - Initial step size.
TOL - Tolerance on search.
CH_PERF - Change in performance on previous step.
and returns,
A - Step size which minimizes performance.
gX - Gradient at new minimum point.
PERF - Performance value at new minimum point.
RETCODE - Return code which has three elements. The first two elements correspond to
the number of function evaluations in the two stages of the search
The third element is a return code. These will have different meanings
for different search algorithms. Some may not be used in this function.
0 - normal; 1 - minimum step taken; 2 - maximum step taken;
3 - beta condition not met.
DELTA - New initial step size. Based on the current step size.
TOL - New tolerance on search.
Parameters used for the backtracking algorithm are:
alpha - Scale factor which determines sufficient reduction in perf.
beta - Scale factor which determines sufficiently large step size.
low_lim - Lower limit on change in step size.
up_lim - Upper limit on change in step size.
maxstep - Maximum step length.
minstep - Minimum step length.
scale_tol - Parameter which relates the tolerance tol to the initial step
size delta. Usually set to 20.
The defaults for these parameters are set in the training function which
calls it. See TRAINCGF, TRAINCGB, TRAINCGP, TRAINBFG, TRAINOSS
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element Pd{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element Tl{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
Network Use
You can create a standard network that uses SRCHBAC with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINCGF using
the line search function SRCHBAC:
1) Set NET.trainFcn to 'traincgf'.
This will set NET.trainParam to TRAINCGF's default parameters.
2) Set NET.trainParam.searchFcn to 'srchbac'.
The SRCHBAC function can be used with any of the following
training functions: TRAINCGF, TRAINCGB, TRAINCGP, TRAINBFG, TRAINOSS.
Examples
Here is a problem consisting of inputs P and targets T that we would
like to solve with a network.
p = [0 1 2 3 4 5];
t = [0 0 0 1 1 1];
Here a two-layer feed-forward network is created. The network's
input ranges from [0 to 5]. The first layer has two TANSIG
neurons, and the second layer has one LOGSIG neuron. The TRAINCGF
network training function and the SRCHBAC search function are used.
% Create and Test a Network
net = newff([0 5],[2 1],{'tansig','logsig'},'traincgf');
a = sim(net,p)
% Train and Retest the Network
net.trainParam.searchFcn = 'srchbac';
net.trainParam.epochs = 50;
net.trainParam.show = 10;
net.trainParam.goal = 0.1;
net = train(net,p,t);
a = sim(net,p)
Algorithm
SRCHBAC locates the minimum of the performance function in
the search direction dX, using the backtracking algorithm
described on pages 126 and 328 of Dennis and Schnabel.
(Numerical Methods for Unconstrained Optimization and Nonlinear Equations 1983).
See also SRCHBRE, SRCHCHA, SRCHGOL, SRCHHYB
References
Dennis and Schnabel, Numerical Methods for Unconstrained Optimization
and Nonlinear Equations, 1983.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:42
13488 bytes
SRCHBRE One-dimensional interval location using Brent's method.
Syntax
[a,gX,perf,retcode,delta,tol] = srchbre(net,X,Pd,Tl,Ai,Q,TS,dX,gX,perf,dperf,delta,tol,ch_perf)
Description
SRCHBRE is a linear search routine. It searches in a given direction
to locate the minimum of the performance function in that direction.
It uses a technique called Brent's method.
SRCHBRE(NET,X,Pd,Tl,Ai,Q,TS,dX,gX,PERF,DPERF,DELTA,TOL,CH_PERF) takes these inputs,
NET - Neural network.
X - Vector containing current values of weights and biases.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
dX - Search direction vector.
gX - Gradient vector.
PERF - Performance value at current X.
DPERF - Slope of performance value at current X in direction of dX.
DELTA - Initial step size.
TOL - Tolerance on search.
CH_PERF - Change in performance on previous step.
and returns,
A - Step size which minimizes performance.
gX - Gradient at new minimum point.
PERF - Performance value at new minimum point.
RETCODE - Return code which has three elements. The first two elements correspond to
the number of function evaluations in the two stages of the search
The third element is a return code. These will have different meanings
for different search algorithms. Some may not be used in this function.
0 - normal; 1 - minimum step taken; 2 - maximum step taken;
3 - beta condition not met.
DELTA - New initial step size. Based on the current step size.
TOL - New tolerance on search.
Parameters used for the brent algorithm are:
alpha - Scale factor which determines sufficient reduction in perf.
beta - Scale factor which determines sufficiently large step size.
bmax - Largest step size.
scale_tol - Parameter which relates the tolerance tol to the initial step
size delta. Usually set to 20.
The defaults for these parameters are set in the training function which
calls it. See TRAINCGF, TRAINCGB, TRAINCGP, TRAINBFG, TRAINOSS
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element Pd{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element Tl{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
Network Use
You can create a standard network that uses SRCHBRE with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINCGF and
to use the line search function SRCHBRE:
1) Set NET.trainFcn to 'traincgf'.
This will set NET.trainParam to TRAINCGF's default parameters.
2) Set NET.trainParam.searchFcn to 'srchbre'.
The SRCHBRE function can be used with any of the following
training functions: TRAINCGF, TRAINCGB, TRAINCGP, TRAINBFG, TRAINOSS.
Examples
Here is a problem consisting of inputs P and targets T that we would
like to solve with a network.
p = [0 1 2 3 4 5];
t = [0 0 0 1 1 1];
Here a two-layer feed-forward network is created. The network's
input ranges from [0 to 5]. The first layer has two TANSIG
neurons, and the second layer has one LOGSIG neuron. The TRAINCGF
network training function and the SRCHBRE search function are to be used.
% Create and Test a Network
net = newff([0 5],[2 1],{'tansig','logsig'},'traincgf');
a = sim(net,p)
% Train and Retest the Network
net.trainParam.searchFcn = 'srchbre';
net.trainParam.epochs = 50;
net.trainParam.show = 10;
net.trainParam.goal = 0.1;
net = train(net,p,t);
a = sim(net,p)
Algorithm
SRCHBRE brackets the minimum of the performance function in
the search direction dX, using Brent's
algorithm described on page 46 of Scales (Introduction to
Non-Linear Estimation 1985). It is a hybrid algorithm based on
the golden section search and quadratic approximation.
See also SRCHBAC, SRCHCHA, SRCHGOL, SRCHHYB
References
Scales, Introduction to Non-Linear Estimation, 1985.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:44
11397 bytes
SRCHCHA One-dimensional minimization using the method of Charalambous.
Syntax
[a,gX,perf,retcode,delta,tol] = srchcha(net,X,Pd,Tl,Ai,Q,TS,dX,gX,
perf,dperf,delta,tol,ch_perf)
Description
SRCHCHA is a linear search routine. It searches in a given direction
to locate the minimum of the performance function in that direction.
It uses a technique based on the method of Charalambous.
SRCHCHA(NET,X,Pd,Tl,Ai,Q,TS,dX,gX,PERF,DPERF,DELTA,TOL,CH_PERF)
takes these inputs,
NET - Neural network.
X - Vector containing current values of weights and biases.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
dX - Search direction vector.
gX - Gradient vector.
PERF - Performance value at current X.
DPERF - Slope of performance value at current X in direction of dX.
DELTA - Initial step size.
TOL - Tolerance on search.
CH_PERF - Change in performance on previous step.
and returns,
A - Step size which minimizes performance.
gX - Gradient at new minimum point.
PERF - Performance value at new minimum point.
RETCODE - Return code which has three elements. The first two elements
correspond to the number of function evaluations in the two
stages of the search. The third element is a return code.
These will have different meanings for different search
algorithms. Some may not be used in this function.
0 - normal; 1 - minimum step taken; 2 - maximum step taken;
3 - beta condition not met.
DELTA - New initial step size. Based on the current step size.
TOL - New tolerance on search.
Parameters used for the Charalambous algorithm are:
alpha - Scale factor which determines sufficient reduction in perf.
beta - Scale factor which determines sufficiently large step size.
gama - Parameter to avoid small reductions in performance. Usually
set to 0.1.
scale_tol - Parameter which relates the tolerance tol to the initial step
size delta. Usually set to 20.
The defaults for these parameters are set in the training function which
calls it. See TRAINCGF, TRAINCGB, TRAINCGP, TRAINBFG, TRAINOSS
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element Pd{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element Tl{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
Network Use
You can create a standard network that uses SRCHCHA with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINCGF using
the line search function SRCHCHA:
1) Set NET.trainFcn to 'traincgf'.
This will set NET.trainParam to TRAINCGF's default parameters.
2) Set NET.trainParam.searchFcn to 'srchcha'.
The SRCHCHA function can be used with any of the following
training functions: TRAINCGF, TRAINCGB, TRAINCGP, TRAINBFG, TRAINOSS.
Examples
Here is a problem consisting of inputs P and targets T that we would
like to solve with a network.
p = [0 1 2 3 4 5];
t = [0 0 0 1 1 1];
Here a two-layer feed-forward network is created. The network's
input ranges from [0 to 5]. The first layer has two TANSIG
neurons, and the second layer has one LOGSIG neuron. The TRAINCGF
network training function and the SRCHCHA search function are to be used.
% Create and Test a Network
net = newff([0 5],[2 1],{'tansig','logsig'},'traincgf');
a = sim(net,p)
% Train and Retest the Network
net.trainParam.searchFcn = 'srchcha';
net.trainParam.epochs = 50;
net.trainParam.show = 10;
net.trainParam.goal = 0.1;
net = train(net,p,t);
a = sim(net,p)
Algorithm
SRCHCHA locates the minimum of the performance function in
the search direction dX, using an algorithm based on
the method described in Charalambous (IEE Proc. vol. 139, no. 3, June 1992).
See also SRCHBAC, SRCHBRE, SRCHGOL, SRCHHYB
References
Charalambous, IEE Proceedings, vol. 139, no. 3, June 1992.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:44
9472 bytes
SRCHGOL One-dimensional minimization using golden section search.
Syntax
[a,gX,perf,retcode,delta,tol] = srchgol(net,X,Pd,Tl,Ai,Q,TS,dX,gX,perf,dperf,delta,tol,ch_perf)
Description
SRCHGOL is a linear search routine. It searches in a given direction
to locate the minimum of the performance function in that direction.
It uses a technique called the golden section search.
SRCHGOL(NET,X,Pd,Tl,Ai,Q,TS,dX,gX,PERF,DPERF,DELTA,TOL,CH_PERF) takes these inputs,
NET - Neural network.
X - Vector containing current values of weights and biases.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
dX - Search direction vector.
gX - Gradient vector.
PERF - Performance value at current X.
DPERF - Slope of performance value at current X in direction of dX.
DELTA - Initial step size.
TOL - Tolerance on search.
CH_PERF - Change in performance on previous step.
and returns,
A - Step size which minimizes performance.
gX - Gradient at new minimum point.
PERF - Performance value at new minimum point.
RETCODE - Return code which has three elements. The first two elements correspond to
the number of function evaluations in the two stages of the search
The third element is a return code. These will have different meanings
for different search algorithms. Some may not be used in this function.
0 - normal; 1 - minimum step taken; 2 - maximum step taken;
3 - beta condition not met.
DELTA - New initial step size. Based on the current step size.
TOL - New tolerance on search.
Parameters used for the golden section algorithm are:
alpha - Scale factor which determines sufficient reduction in perf.
bmax - Largest step size.
scale_tol - Parameter which relates the tolerance tol to the initial step
size delta. Usually set to 20.
The defaults for these parameters are set in the training function which
calls it. See TRAINCGF, TRAINCGB, TRAINCGP, TRAINBFG, TRAINOSS
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element Pd{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element Tl{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
Network Use
You can create a standard network that uses SRCHGOL with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINCGF using the
line search function SRCHGOL:
1) Set NET.trainFcn to 'traincgf'.
This will set NET.trainParam to TRAINCGF's default parameters.
2) Set NET.trainParam.searchFcn to 'srchgol'.
The SRCHGOL function can be used with any of the following
training functions: TRAINCGF, TRAINCGB, TRAINCGP, TRAINBFG, TRAINOSS.
Examples
Here is a problem consisting of inputs P and targets T that we would
like to solve with a network.
p = [0 1 2 3 4 5];
t = [0 0 0 1 1 1];
Here a two-layer feed-forward network is created. The network's
input ranges from [0 to 5]. The first layer has two TANSIG
neurons, and the second layer has one LOGSIG neuron. The TRAINCGF
network training function and the SRCHGOL search function are to be used.
% Create and Test a Network
net = newff([0 5],[2 1],{'tansig','logsig'},'traincgf');
a = sim(net,p)
% Train and Retest the Network
net.trainParam.searchFcn = 'srchgol';
net.trainParam.epochs = 50;
net.trainParam.show = 10;
net.trainParam.goal = 0.1;
net = train(net,p,t);
a = sim(net,p)
Algorithm
SRCHGOL locates the minimum of the performance function in
the search direction dX, using the
golden section search. It is based on the algorithm as
described on page 33 of Scales (Introduction to Non-Linear Estimation 1985).
See also SRCHBAC, SRCHBRE, SRCHCHA, SRCHHYB
References
Scales, Introduction to Non-Linear Estimation, 1985.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:46
8834 bytes
SRCHHYB One-dimensional minimization using a hybrid bisection-cubic search.
Syntax
[a,gX,perf,retcode,delta,tol] = srchhyb(net,X,Pd,Tl,Ai,Q,TS,dX,gX,perf,dperf,delta,tol,ch_perf)
Description
SRCHHYB is a linear search routine. It searches in a given direction
to locate the minimum of the performance function in that direction.
It uses a technique which is a combination of a bisection and a
cubic interpolation.
SRCHHYB(NET,X,Pd,Tl,Ai,Q,TS,dX,gX,PERF,DPERF,DELTA,TOL,CH_PERF) takes these inputs,
NET - Neural network.
X - Vector containing current values of weights and biases.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
dX - Search direction vector.
gX - Gradient vector.
PERF - Performance value at current X.
DPERF - Slope of performance value at current X in direction of dX.
DELTA - Initial step size.
TOL - Tolerance on search.
CH_PERF - Change in performance on previous step.
and returns,
A - Step size which minimizes performance.
gX - Gradient at new minimum point.
PERF - Performance value at new minimum point.
RETCODE - Return code which has three elements. The first two elements correspond to
the number of function evaluations in the two stages of the search
The third element is a return code. These will have different meanings
for different search algorithms. Some may not be used in this function.
0 - normal; 1 - minimum step taken; 2 - maximum step taken;
3 - beta condition not met.
DELTA - New initial step size. Based on the current step size.
TOL - New tolerance on search.
Parameters used for the hybrid bisection-cubic algorithm are:
alpha - Scale factor which determines sufficient reduction in perf.
beta - Scale factor which determines sufficiently large step size.
bmax - Largest step size.
scale_tol - Parameter which relates the tolerance tol to the initial step
size delta. Usually set to 20.
The defaults for these parameters are set in the training function which
calls it. See TRAINCGF, TRAINCGB, TRAINCGP, TRAINBFG, TRAINOSS
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element Pd{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element Tl{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
Network Use
You can create a standard network that uses SRCHHYB with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINCGF using
the line search function SRCHHYB:
1) Set NET.trainFcn to 'traincgf'.
This will set NET.trainParam to TRAINCGF's default parameters.
2) Set NET.trainParam.searchFcn to 'srchhyb'.
The SRCHHYB function can be used with any of the following
training functions: TRAINCGF, TRAINCGB, TRAINCGP, TRAINBFG, TRAINOSS.
Examples
Here is a problem consisting of inputs P and targets T that we would
like to solve with a network.
p = [0 1 2 3 4 5];
t = [0 0 0 1 1 1];
Here a two-layer feed-forward network is created. The network's
input ranges from [0 to 5]. The first layer has two TANSIG
neurons, and the second layer has one LOGSIG neuron. The TRAINCGF
network training function and the SRCHHYB search function are to be used.
% Create and Test a Network
net = newff([0 5],[2 1],{'tansig','logsig'},'traincgf');
a = sim(net,p)
% Train and Retest the Network
net.trainParam.searchFcn = 'srchhyb';
net.trainParam.epochs = 50;
net.trainParam.show = 10;
net.trainParam.goal = 0.1;
net = train(net,p,t);
a = sim(net,p)
Algorithm
SRCHHYB locates the minimum of the performance function in
the search direction dX, using the hybrid
bisection-cubic interpolation algorithm described on page 50 of Scales.
(Introduction to Non-Linear Estimation 1985)
See also SRCHBAC, SRCHBRE, SRCHCHA, SRCHGOL
References
Scales, Introduction to Non-Linear Estimation, 1985.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:46
11492 bytes
SSE Sum squared error performance function.
Syntax
perf = sse(E,Y,X,FP)
dPerf_dy = sse('dy',E,Y,X,perf,FP);
dPerf_dx = sse('dx',E,Y,X,perf,FP);
info = sse(code)
Description
SSE is a network performance function. It measures
performance according to the sum of squared errors.
SSE(E,Y,X,FP) takes E and optional function parameters,
E - Matrix or cell array of error vectors.
Y - Matrix or cell array of output vectors. (ignored).
X - Vector of all weight and bias values (ignored).
FP - Function parameters (ignored).
and returns the sum squared error.
SSE('dy',E,Y,X,PERF,FP) returns derivative of PERF with respect to Y.
SSE('dx',E,Y,X,PERF,FP) returns derivative of PERF with respect to X.
SSE('name') returns the name of this function.
SSE('pnames') returns the names of the function parameters.
SSE('pdefaults') returns the default function parameters.
Examples
Here a two-layer feed-forward network is created with a 1-element input
ranging from -10 to 10, four hidden TANSIG neurons, and one
PURELIN output neuron.
net = newff([-10 10],[4 1],{'tansig','purelin'});
Here the network is given a batch of inputs P. The error
is calculated by subtracting the output A from target T.
Then the sum squared error is calculated.
p = [-10 -5 0 5 10];
t = [0 0 1 1 1];
y = sim(net,p)
e = t-y
perf = sse(e)
Note that SSE can be called with only one argument because
the other arguments are ignored. SSE supports those arguments
to conform to the standard performance function argument list.
Network Use
To prepare a custom network to be trained with SSE set
NET.performFcn to 'sse'. This will automatically set
NET.performParam to the empty matrix [], as SSE has no
performance parameters.
Calling TRAIN or ADAPT will result in SSE being used to calculate
performance.
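As a minimal sketch of this step (reusing net, p and t from the example above):
net.performFcn = 'sse';   % NET.performParam becomes []
net = train(net,p,t);     % TRAIN now uses SSE to measure performance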
See also DSSE.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:22
3199 bytes
SUBSTRING Return part of a Java string.
ApplicationRoot\WavixIV\neural501
17-Aug-2004 16:42:24
287 bytes
TANSIG Hyperbolic tangent sigmoid transfer function.
Syntax
A = tansig(N,FP)
dA_dN = tansig('dn',N,A,FP)
INFO = tansig(CODE)
Description
TANSIG is a neural transfer function. Transfer functions
calculate a layer's output from its net input.
TANSIG(N,FP) takes N and optional function parameters,
N - SxQ matrix of net input (column) vectors.
FP - Struct of function parameters (ignored).
and returns A, the SxQ matrix of N's elements squashed into [-1 1].
TANSIG('dn',N,A,FP) returns the derivative of A with respect to N.
If A or FP are not supplied or are set to [], FP reverts to
the default parameters, and A is calculated from N.
TANSIG('name') returns the name of this function.
TANSIG('output',FP) returns the [min max] output range.
TANSIG('active',FP) returns the [min max] active input range.
TANSIG('fullderiv') returns 1 or 0, whether DA_DN is SxSxQ or SxQ.
TANSIG('fpnames') returns the names of the function parameters.
TANSIG('fpdefaults') returns the default function parameters.
Examples
Here is the code to create a plot of the TANSIG transfer function.
n = -5:0.1:5;
a = tansig(n);
plot(n,a)
Here we assign this transfer function to layer i of a network.
net.layers{i}.transferFcn = 'tansig';
Algorithm
a = tansig(n) = 2/(1+exp(-2*n))-1
This is mathematically equivalent to TANH(N). It differs
in that it runs faster than the MATLAB implementation of TANH,
but the results can have very small numerical differences. This
function is a good trade off for neural networks, where speed is
important and the exact shape of the transfer function is not.
See also SIM, DTANSIG, LOGSIG.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:18
3052 bytes
TEMPLATE_INIT_LAYER Template layer initialization function.
WARNING - Future versions of the toolbox may require you to update
custom functions.
Directions for Customizing
1. Make a copy of this function with a new name
2. Edit your new function according to the code comments marked ***
3. Type HELP NNINIT to see a list of other layer initialization functions.
Syntax
net = template_init_layer(net,i)
Description
TEMPLATE_INIT_LAYER(NET,i) takes two arguments,
NET - Neural network.
i - Index of a layer.
and returns the network with layer i's weights and biases updated.
Network Use
To prepare a custom network to be initialized with TEMPLATE_INIT_LAYER:
1) Set NET.initFcn to 'initlay'.
(This will set NET.initParam to the empty matrix [] since
INITLAY has no initialization parameters.)
2) Set NET.layers{i}.initFcn to 'template_init_layer'.
To initialize the network call INIT.
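As a minimal sketch, the two steps above for layer 1 of an existing network net (the layer index is illustrative):
net.initFcn = 'initlay';                        % network-level: initialize layer by layer
net.layers{1}.initFcn = 'template_init_layer';  % layer 1 uses the custom layer initialization
net = init(net);                                % reinitialize weights and biases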
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:18:58
1726 bytes
TEMPLATE_INIT_NETWORK Template network initialization function.
WARNING - Future versions of the toolbox may require you to update
custom functions.
Directions for Customizing
1. Make a copy of this function with a new name
2. Edit your new function according to the code comments marked ***
3. Type HELP NNINIT to see a list of other network initialization functions.
Syntax
net = template_init_network(net)
info = template_init_network(code)
Description
TEMPLATE_INIT_NETWORK(NET) takes:
NET - Neural network.
and returns the network with each layer updated.
TEMPLATE_INIT_NETWORK(CODE) return useful information for each CODE string:
'pnames' - Names of initialization parameters.
'pdefaults' - Default initialization parameters.
Network Use
To prepare a custom network to be initialized with
TEMPLATE_INIT_NETWORK set NET.initFcn to 'template_init_network'.
To initialize the network call INIT.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:18:58
2395 bytes
TEMPLATE_INIT_WB Template weight/bias initialization function.
WARNING - Future versions of the toolbox may require you to update
custom functions.
Directions for Customizing
1. Make a copy of this function with a new name
2. Edit your new function according to the code comments marked ***
3. Type HELP NNINIT to see a list of other weight/bias initialization functions.
Syntax
W = template_init_wb(S,PR)
b = template_init_wb(S)
Description
TEMPLATE_INIT_WB(S,PR) takes,
S - number of neurons.
PR - Rx2 matrix of R input ranges.
and returns an S-by-R weight matrix of random values between -1 and 1.
Network Use
To prepare the weights and the bias of layer i of a custom network
to be initialized with TEMPLATE_INIT_WB:
1) Set NET.initFcn to 'initlay'.
(NET.initParam will automatically become INITLAY's default parameters.)
2) Set NET.layers{i}.initFcn to 'initwb'.
3) Set each NET.inputWeights{i,j}.initFcn to 'template_init_wb'.
Set each NET.layerWeights{i,j}.initFcn to 'template_init_wb';
Set each NET.biases{i}.initFcn to 'template_init_wb'.
To initialize the network call INIT.
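As a minimal sketch, the steps above for layer 1 of an existing network net (the weight and bias indices are illustrative):
net.initFcn = 'initlay';
net.layers{1}.initFcn = 'initwb';
net.inputWeights{1,1}.initFcn = 'template_init_wb';
net.biases{1}.initFcn = 'template_init_wb';
net = init(net);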
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:00
1624 bytes
TEMPLATE_LEARN Template learning function.
WARNING - Future versions of the toolbox may require you to update
custom functions.
Directions for Customizing
1. Make a copy of this function with a new name
2. Edit your new function according to the code comments marked ***
3. Type HELP NNLEARN to see a list of other learning functions.
Syntax
[dW,LS] = template_learn(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
[db,LS] = template_learn(b,ones(1,Q),Z,N,A,T,E,gW,gA,D,LP,LS)
info = template_learn(code)
Description
TEMPLATE_LEARN(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - SxR weight matrix (or Sx1 bias vector).
P - RxQ input vectors (or ones(1,Q)).
Z - SxQ weighted input vectors.
N - SxQ net input vectors.
A - SxQ output vectors.
T - SxQ layer target vectors.
E - SxQ layer error vectors.
gW - SxR gradient with respect to performance.
gA - SxQ output gradient with respect to performance.
D - SxS neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns
dW - SxR weight (or bias) change matrix.
LS - New learning state.
TEMPLATE_LEARN(CODE) return useful information for each CODE string:
'pnames' - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
Network Use
To prepare the weights and the bias of layer i of a custom network
to train or adapt with TEMPLATE_LEARN:
1) Set NET.trainFcn to 'trainb' or NET.adaptFcn to 'trains'.
2) Set each NET.inputWeights{i,j}.learnFcn to 'template_learn'.
Set each NET.layerWeights{i,j}.learnFcn to 'template_learn'.
Set NET.biases{i}.learnFcn to 'template_learn'.
Each weight and bias learning parameter property will automatically
be set to TEMPLATE_LEARN's default parameters.
To train or adapt the network use TRAIN or ADAPT.
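As a minimal sketch, the steps above for layer 1 of an existing network net (indices are illustrative; p and t are assumed training data):
net.trainFcn = 'trainb';                            % or: net.adaptFcn = 'trains';
net.inputWeights{1,1}.learnFcn = 'template_learn';
net.biases{1}.learnFcn = 'template_learn';
net = train(net,p,t);                               % or: net = adapt(net,p,t);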
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:00
3156 bytes
TEMPLATE_NET_INPUT Template net input function.
WARNING - Future versions of the toolbox may require you to update
custom functions.
Directions for Customizing
1. Make a copy of this function with a new name
2. Edit your new function according to the code comments marked ***
3. Type HELP NNNETINPUT to see a list of other net input functions.
Syntax
N = template_net_input({Z1,Z2,...,Zn},FP)
dN_dZj = template_net_input('dz',j,Z,N,FP)
INFO = template_net_input(CODE)
Description
TEMPLATE_NET_INPUT({Z1,Z2,...,Zn},FP) takes these arguments,
Zi - SxQ matrices in a row cell array.
FP - Row cell array of function parameters (optional, ignored).
Returns element-wise product of Z1 to Zn.
TEMPLATE_NET_INPUT(code) returns information about this function.
These codes are defined:
'fullderiv' - Full NxSxQ derivative = 1, Element-wise SxQ derivative = 0.
'name' - Full name.
'fpnames' - Returns names of function parameters.
'fpdefaults' - Returns default function parameters.
Network Use
To change a network so that a layer uses TEMPLATE_NET_INPUT, set
NET.layers{i}.netInputFcn to 'template_net_input'.
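As a minimal sketch, a direct call on two weighted-input matrices followed by the assignment above (sizes and the layer index are illustrative):
z1 = rand(3,2);                                   % SxQ weighted inputs
z2 = rand(3,2);
n = template_net_input({z1,z2})                   % element-wise product of Z1 and Z2
net.layers{1}.netInputFcn = 'template_net_input';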
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:02
3234 bytes
TEMPLATE_NEW_NETWORK Template new network function.
WARNING - Future versions of the toolbox may require you to update
custom functions.
Directions for Customizing
1. Make a copy of this function with a new name
2. Edit your new function according to the code comments marked ***
3. Type HELP NNNETWORK to see a list of other new network functions.
Syntax
net = template_new_network(...args...)
Description
TEMPLATE_NEW_NETWORK(...args...) takes however many arguments you want
to define and returns a new network.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:02
1134 bytes
TEMPLATE_PERFORMANCE Template performance function.
WARNING - Future versions of the toolbox may require you to update
custom functions.
Directions for Customizing
1. Make a copy of this function with a new name
2. Edit your new function according to the code comments marked ***
3. Type HELP NNPERFORMANCE to see a list of other performance functions.
Syntax
perf = template_performance(E,Y,X,FP)
dPerf_dy = template_performance('dy',E,Y,X,perf,FP);
dPerf_dx = template_performance('dx',E,Y,X,perf,FP);
info = template_performance(code)
Description
TEMPLATE_PERFORMANCE(E,Y,X,FP) takes E and optional function parameters,
E - Matrix or cell array of error vectors.
Y - Matrix or cell array of output vectors. (ignored).
X - Vector of all weight and bias values (ignored).
FP - Function parameters (ignored).
and returns the mean squared error.
TEMPLATE_PERFORMANCE('dy',E,Y,X,PERF,FP) returns derivative of PERF with respect to Y.
TEMPLATE_PERFORMANCE('dx',E,Y,X,PERF,FP) returns derivative of PERF with respect to X.
TEMPLATE_PERFORMANCE('name') returns the name of this function.
TEMPLATE_PERFORMANCE('pnames') returns the names of the function parameters.
TEMPLATE_PERFORMANCE('pdefaults') returns the default function parameters.
Network Use
To prepare a custom network to be trained with TEMPLATE_PERFORMANCE set
NET.performFcn to 'template_performance'. This will automatically set
NET.performParam to the default function parameters.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:04
4476 bytes
TEMPLATE_PROCESS Template data processing function.
WARNING - Future versions of the toolbox may require you to update
custom functions.
Directions for Customizing
1. Make a copy of this function with a new name
2. Edit your new function according to the code comments marked ***
3. Type HELP NNPROCESS to see a list of other processing functions.
Syntax
[y,ps] = template_process(x,...1 to 3 args...)
[y,ps] = template_process(x,fp)
y = template_process('apply',x,ps)
x = template_process('reverse',y,ps)
dx_dy = template_process('dx',x,y,ps)
dx_dy = template_process('dx',x,[],ps)
name = template_process('name');
fp = template_process('pdefaults');
names = template_process('pnames');
template_process('pcheck',fp);
Description
TEMPLATE_PROCESS(X,...1 to 3 args...) takes X and optional parameters,
X - NxQ matrix or a 1xTS row cell array of NxQ matrices.
arg1 - Optional argument, default = ?
arg2 - Optional argument, default = ?
arg3 - Optional argument, default = ?
and returns,
Y - Each MxQ matrix (where M == N) (optional).
PS - Process settings, to allow consistent processing of values.
TEMPLATE_PROCESS(X,FP) takes parameters as struct: FP.arg1, etc.
TEMPLATE_PROCESS('apply',X,PS) returns Y, given X and settings PS.
TEMPLATE_PROCESS('reverse',Y,PS) returns X, given Y and settings PS.
TEMPLATE_PROCESS('dx',X,Y,PS) returns MxNxQ derivative of Y w/respect to X.
TEMPLATE_PROCESS('dx',X,[],PS) returns the derivative, less efficiently.
TEMPLATE_PROCESS('name') returns the name of this process method.
TEMPLATE_PROCESS('pdefaults') returns default process parameter structure.
TEMPLATE_PROCESS('pdesc') returns the process parameter descriptions.
TEMPLATE_PROCESS('pcheck',fp) throws an error if any parameter is illegal.
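As a minimal sketch of the calling forms above (x is an illustrative NxQ matrix; PS carries the settings needed for consistent reuse):
x = rand(2,5);
[y,ps] = template_process(x);             % process x and return settings PS
y2 = template_process('apply',x,ps);      % apply the same settings to other data
xr = template_process('reverse',y2,ps);   % invert the processing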
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:04
4810 bytes
TEMPLATE_SEARCH Template line search function.
WARNING - Future versions of the toolbox may require you to update
custom functions.
Directions for Customizing
1. Make a copy of this function with a new name
2. Edit your new function according to the code comments marked ***
3. Type HELP NNSEARCH to see a list of other line search functions.
Syntax
[a,gX,perf,retcode,delta,tol] = template_search(net,X,Pd,Tl,Ai,Q,TS,dX,gX,perf,dperf,delta,tol,ch_perf)
Description
TEMPLATE_SEARCH(NET,X,Pd,Tl,Ai,Q,TS,dX,gX,PERF,DPERF,DELTA,TOL,CH_PERF) takes these inputs,
NET - Neural network.
X - Vector containing current values of weights and biases.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
dX - Search direction vector.
gX - Gradient vector.
PERF - Performance value at current X.
DPERF - Slope of performance value at current X in direction of dX.
DELTA - Initial step size.
TOL - Tolerance on search.
CH_PERF - Change in performance on previous step.
and returns,
A - Step size which minimizes performance.
gX - Gradient at new minimum point.
PERF - Performance value at new minimum point.
RETCODE - Return code which has three elements. The first two elements correspond to
the number of function evaluations in the two stages of the search.
The third element is a return code. These will have different meanings
for different search algorithms. Some may not be used in this function.
0 - normal; 1 - minimum step taken; 2 - maximum step taken;
3 - beta condition not met.
DELTA - New initial step size. Based on the current step size.
TOL - New tolerance on search.
Parameters used for the backstepping algorithm are:
alpha - Scale factor which determines sufficient reduction in perf.
beta - Scale factor which determines sufficiently large step size.
low_lim - Lower limit on change in step size.
up_lim - Upper limit on change in step size.
maxstep - Maximum step length.
minstep - Minimum step length.
scale_tol - Parameter which relates the tolerance tol to the initial step
size delta. Usually set to 20.
The defaults for these parameters are set in the training function which
calls it. See TRAINCGF, TRAINCGB, TRAINCGP, TRAINBFG, TRAINOSS.
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element P{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
Network Use
To prepare a custom network to be trained with TRAINCGF using
the line search function TEMPLATE_SEARCH:
1) Set NET.trainFcn to 'traincgf'.
This will set NET.trainParam to TRAINCGF's default parameters.
2) Set NET.trainParam.searchFcn to 'template_search'.
The TEMPLATE_SEARCH function can be used with any of the following
training functions: TRAINCGF, TRAINCGB, TRAINCGP, TRAINBFG, TRAINOSS.
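As a minimal sketch, the two steps above applied to an existing network net (p and t are assumed training data):
net.trainFcn = 'traincgf';                      % NET.trainParam gets TRAINCGF's defaults
net.trainParam.searchFcn = 'template_search';   % use the custom line search
net = train(net,p,t);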
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:06
12183 bytes
TEMPLATE_TOPOLOGY Template topology function.
WARNING - Future versions of the toolbox may require you to update
custom functions.
Directions for Customizing
1. Make a copy of this function with a new name
2. Edit your new function according to the code comments marked ***
3. Type HELP NNTOPOLOGY to see a list of other topology functions.
Syntax
pos = template_topology(dim1,dim2,...,dimN)
Description
TEMPLATE_TOPOLOGY(DIM1,DIM2,...,DIMN) takes N arguments,
DIMi - Length of layer in dimension i.
and returns an NxS matrix of N coordinate vectors
where S is the product of DIM1*DIM2*...*DIMN.
Network Use
To change a network so a layer uses TEMPLATE_TOPOLOGY set
NET.layers{i}.topologyFcn to 'template_topology'.
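As a minimal sketch, a direct call for a 3x4 layer followed by the assignment above (dimensions and the layer index are illustrative):
pos = template_topology(3,4)                      % 2x12 matrix of neuron positions
net.layers{1}.topologyFcn = 'template_topology';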
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:06
1399 bytes
TEMPLATE_TRAIN Template train function.
WARNING - Future versions of the toolbox may require you to update
custom functions.
Directions for Customizing
1. Make a copy of this function with a new name
2. Edit your new function according to the code comments marked ***
3. Type HELP NNTRAIN to see a list of other training functions.
Syntax
[net,TR,Ac,El] = template_train(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = template_train(code)
Description
TEMPLATE_TRAIN(NET,Pd,Tl,Ai,Q,TS,VV) takes these inputs,
NET - Neural network.
Pd - Delayed inputs.
Tl - Layer targets.
Ai - Initial input conditions.
Q - Batch size.
TS - Time steps.
VV - Empty matrix [] or structure of validation vectors.
TV - Empty matrix [] or structure of test vectors.
and returns,
NET - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
TR.vperf - Validation performance.
TR.tperf - Test performance.
Ac - Collective layer outputs for last epoch.
El - Layer errors for last epoch.
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element Pd{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix or [].
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
If VV or TV is not [], it must be a structure of vectors:
VV.PD, TV.PD - Validation/test delayed inputs.
VV.Tl, TV.Tl - Validation/test layer targets.
VV.Ai, TV.Ai - Validation/test initial input conditions.
VV.Q, TV.Q - Validation/test batch size.
VV.TS, TV.TS - Validation/test time steps.
Validation vectors are used to stop training early if the network
performance on the validation vectors fails to improve or remains
the same for MAX_FAIL epochs in a row. Test vectors are used as
a further check that the network is generalizing well, but do not
have any effect on training.
TEMPLATE_TRAIN(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
To prepare a custom network to be trained with TEMPLATE_TRAIN:
1) Set NET.trainFcn to 'template_train'.
(This will set NET.trainParam to TEMPLATE_TRAIN's default parameters.)
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:08
9577 bytes
TEMPLATE_TRANSFER Template transfer function.
WARNING - Future versions of the toolbox may require you to update
custom functions.
Directions for Customizing
1. Make a copy of this function with a new name
2. Edit your new function according to the code comments marked ***
3. Type HELP NNTRANSFER to see a list of other transfer functions.
Syntax
A = template_transfer(N,FP)
dA_dN = template_transfer('dn',N,A,FP)
INFO = template_transfer(CODE)
Description
TEMPLATE_TRANSFER(N,FP) takes N and optional function parameters,
N - SxQ matrix of net input (column) vectors.
FP - Struct of function parameters (ignored).
and returns A, the SxQ boolean matrix with 1's where N >= 0.
TEMPLATE_TRANSFER('dn',N,A,FP) returns the SxQ derivative of A with respect to N.
If A or FP are not supplied or are set to [], FP reverts to
the default parameters, and A is calculated from N.
TEMPLATE_TRANSFER('name') returns the name of this function.
TEMPLATE_TRANSFER('output',FP) returns the [min max] output range.
TEMPLATE_TRANSFER('active',FP) returns the [min max] active input range.
TEMPLATE_TRANSFER('fullderiv') returns 1 or 0, whether DA_DN is SxSxQ or SxQ.
TEMPLATE_TRANSFER('fpnames') returns the names of the function parameters.
TEMPLATE_TRANSFER('fpdefaults') returns the default function parameters.
Network Use
To change a network so a layer uses TEMPLATE_TRANSFER set
NET.layers{i}.transferFcn to 'template_transfer'.
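As a minimal sketch, plotting the template transfer function and then assigning it to layer 1 of an existing network net (the layer index is illustrative):
n = -5:0.1:5;
a = template_transfer(n);
plot(n,a)
net.layers{1}.transferFcn = 'template_transfer';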
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:08
3606 bytes
TEMPLATE_WEIGHT Template weight function.
WARNING - Future versions of the toolbox may require you to update
custom functions.
Directions for Customizing
1. Make a copy of this function with a new name
2. Edit your new function according to the code comments marked ***
3. Type HELP NNWEIGHT to see a list of other weight functions.
Syntax
Z = template_weight(W,P,FP)
info = template_weight(code)
dim = template_weight('size',S,R,FP)
dp = template_weight('dp',W,P,Z,FP)
dw = template_weight('dw',W,P,Z,FP)
Description
TEMPLATE_WEIGHT(W,P,FP) takes these inputs,
W - SxR weight matrix.
P - RxQ matrix of Q input (column) vectors.
FP - Row cell array of function parameters (optional, ignored).
and returns the SxQ dot product of W and P.
TEMPLATE_WEIGHT(code) returns information about this function.
These codes are defined:
'pfullderiv' - Input: Reduced derivative = 2, Full derivative = 1, linear derivative = 0.
'wfullderiv' - Weight: Reduced derivative = 2, Full derivative = 1, linear derivative = 0.
'name' - Full name.
'fpnames' - Returns names of function parameters.
'fpdefaults' - Returns default function parameters.
TEMPLATE_WEIGHT('size',S,R,FP) takes the layer dimension S, input dimension R,
and function parameters, and returns the weight size [SxR].
TEMPLATE_WEIGHT('dp',W,P,Z,FP) returns the derivative of Z with respect to P.
TEMPLATE_WEIGHT('dw',W,P,Z,FP) returns the derivative of Z with respect to W.
Network Use
To change a network so an input weight uses TEMPLATE_WEIGHT set
NET.inputWeights{i,j}.weightFcn to 'template_weight'. For a layer weight
set NET.layerWeights{i,j}.weightFcn to 'template_weight'.
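As a minimal sketch, a direct call with illustrative sizes, followed by the assignment above:
w = rand(3,2);                    % SxR weight matrix
p = rand(2,4);                    % RxQ input vectors
z = template_weight(w,p)          % SxQ dot product of W and P
net.inputWeights{1,1}.weightFcn = 'template_weight';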
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:10
4766 bytes
TRAINB Batch training with weight & bias learning rules.
Syntax
[net,TR,Ac,El] = trainb(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = trainb(code)
Description
TRAINB is not called directly. Instead it is called by TRAIN for
networks whose NET.trainFcn property is set to 'trainb'.
TRAINB trains a network with weight and bias learning rules
with batch updates. The weights and biases are updated at the end of
an entire pass through the input data.
TRAINB(NET,Pd,Tl,Ai,Q,TS,VV) takes these inputs,
NET - Neural network.
Pd - Delayed inputs.
Tl - Layer targets.
Ai - Initial input conditions.
Q - Batch size.
TS - Time steps.
VV - Empty matrix [] or structure of validation vectors.
TV - Empty matrix [] or structure of test vectors.
and returns,
NET - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
TR.vperf - Validation performance.
TR.tperf - Test performance.
Ac - Collective layer outputs for last epoch.
El - Layer errors for last epoch.
Training occurs according to TRAINB's training parameters,
shown here with their default values:
net.trainParam.epochs 100 Maximum number of epochs to train
net.trainParam.goal 0 Performance goal
net.trainParam.max_fail 5 Maximum validation failures
net.trainParam.show 25 Epochs between displays (NaN for no displays)
net.trainParam.time inf Maximum time to train in seconds
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element Pd{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix or [].
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
If VV or TV is not [], it must be a structure of vectors:
VV.PD, TV.PD - Validation/test delayed inputs.
VV.Tl, TV.Tl - Validation/test layer targets.
VV.Ai, TV.Ai - Validation/test initial input conditions.
VV.Q, TV.Q - Validation/test batch size.
VV.TS, TV.TS - Validation/test time steps.
Validation vectors are used to stop training early if the network
performance on the validation vectors fails to improve or remains
the same for MAX_FAIL epochs in a row. Test vectors are used as
a further check that the network is generalizing well, but do not
have any effect on training.
TRAINB(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
You can create a standard network that uses TRAINB by calling
NEWLIN.
To prepare a custom network to be trained with TRAINB:
1) Set NET.trainFcn to 'trainb'.
(This will set NET.trainParam to TRAINB's default parameters.)
2) Set each NET.inputWeights{i,j}.learnFcn to a learning function.
Set each NET.layerWeights{i,j}.learnFcn to a learning function.
Set each NET.biases{i}.learnFcn to a learning function.
(Weight and bias learning parameters will automatically be
set to default values for the given learning function.)
To train the network:
1) Set NET.trainParam properties to desired values.
2) Set weight and bias learning parameters to desired values.
3) Call TRAIN.
See NEWLIN for training examples.
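As a minimal sketch of these steps, a small linear network created with NEWLIN (which uses TRAINB by default); the data and parameter values are illustrative:
p = [0 1 2 3];
t = [0 2 4 6];
net = newlin([0 3],1,0,0.01);      % linear layer, learning rate 0.01
net.trainParam.epochs = 200;
net.trainParam.goal = 1e-5;
net = train(net,p,t);              % TRAIN calls TRAINB
a = sim(net,p)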
Algorithm
Each weight and bias updates according to its learning function
after each epoch (one pass through the entire set of input vectors).
Training stops when any of these conditions are met:
1) The maximum number of EPOCHS (repetitions) is reached.
2) Performance has been minimized to the GOAL.
3) The maximum amount of TIME has been exceeded.
4) Validation performance has increased more than MAX_FAIL times
since the last time it decreased (when using validation).
See also NEWP, NEWLIN, TRAIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:50
10767 bytes
TRAINBFG BFGS quasi-Newton backpropagation.
Syntax
[net,tr,Ac,El] = trainbfg(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = trainbfg(code)
Description
TRAINBFG is a network training function that updates weight and
bias values according to the BFGS quasi-Newton method.
TRAINBFG(NET,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,
NET - Neural network.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
VV - Either empty matrix [] or structure of validation vectors.
TV - Either empty matrix [] or structure of test vectors.
and returns,
NET - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
TR.vperf - Validation performance.
TR.tperf - Test performance.
Ac - Collective layer outputs for last epoch.
El - Layer errors for last epoch.
Training occurs according to TRAINBFG's training parameters,
shown here with their default values:
net.trainParam.epochs 100 Maximum number of epochs to train
net.trainParam.show 25 Epochs between displays (NaN for no displays)
net.trainParam.goal 0 Performance goal
net.trainParam.time inf Maximum time to train in seconds
net.trainParam.min_grad 1e-6 Minimum performance gradient
net.trainParam.max_fail 5 Maximum validation failures
net.trainParam.searchFcn 'srchcha' Name of line search routine to use.
Parameters related to line search methods (not all used for all methods):
net.trainParam.scal_tol 20 Divide into delta to determine tolerance for linear search.
net.trainParam.alpha 0.001 Scale factor which determines sufficient reduction in perf.
net.trainParam.beta 0.1 Scale factor which determines sufficiently large step size.
net.trainParam.delta 0.01 Initial step size in interval location step.
net.trainParam.gama 0.1 Parameter to avoid small reductions in performance. Usually set
to 0.1. (See use in SRCH_CHA.)
net.trainParam.low_lim 0.1 Lower limit on change in step size.
net.trainParam.up_lim 0.5 Upper limit on change in step size.
net.trainParam.maxstep 100 Maximum step length.
net.trainParam.minstep 1.0e-6 Minimum step length.
net.trainParam.bmax 26 Maximum step size.
net.trainParam.batch_frag 0 If zero, multiple batches are treated as independent.
Any nonzero value implies a fragmented batch, so the final layer
conditions of the previously trained epoch are used as initial
conditions for the next epoch.
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element P{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
If VV is not [], it must be a structure of validation vectors,
VV.PD - Validation delayed inputs.
VV.Tl - Validation layer targets.
VV.Ai - Validation initial input conditions.
VV.Q - Validation batch size.
VV.TS - Validation time steps.
which is used to stop training early if the network performance
on the validation vectors fails to improve or remains the same
for MAX_FAIL epochs in a row.
If TV is not [], it must be a structure of test vectors,
TV.PD - Test delayed inputs.
TV.Tl - Test layer targets.
TV.Ai - Test initial input conditions.
TV.Q - Test batch size.
TV.TS - Test time steps.
which is used to test the generalization capability of the
trained network.
TRAINBFG(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
You can create a standard network that uses TRAINBFG with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINBFG:
1) Set NET.trainFcn to 'trainbfg'.
This will set NET.trainParam to TRAINBFG's default parameters.
2) Set NET.trainParam properties to desired values.
In either case, calling TRAIN with the resulting network will
train the network with TRAINBFG.
Examples
Here is a problem consisting of inputs P and targets T that we would
like to solve with a network.
P = [0 1 2 3 4 5];
T = [0 0 0 1 1 1];
Here a two-layer feed-forward network is created. The network's
input ranges from 0 to 5. The first layer has two TANSIG
neurons, and the second layer has one LOGSIG neuron. The TRAINBFG
network training function is to be used.
% Create and Test a Network
net = newff([0 5],[2 1],{'tansig','logsig'},'trainbfg');
a = sim(net,P)
% Train and Retest the Network
net.trainParam.epochs = 50;
net.trainParam.show = 10;
net.trainParam.goal = 0.1;
net = train(net,P,T);
a = sim(net,P)
See NEWFF, NEWCF, and NEWELM for other examples.
Algorithm
TRAINBFG can train any network as long as its weight, net input,
and transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance
PERF with respect to the weight and bias variables X. Each
variable is adjusted according to the following:
X = X + a*dX;
where dX is the search direction. The parameter a is selected
to minimize the performance along the search direction. The line
search function searchFcn is used to locate the minimum point.
The first search direction is the negative of the gradient of performance.
In succeeding iterations the search direction is computed
according to the following formula:
dX = -H\gX;
where gX is the gradient and H is an approximate Hessian matrix.
See page 119 of Gill, Murray & Wright (Practical Optimization 1981) for
a more detailed discussion of the BFGS quasi-Newton method.
Training stops when any of these conditions occur:
1) The maximum number of EPOCHS (repetitions) is reached.
2) The maximum amount of TIME has been exceeded.
3) Performance has been minimized to the GOAL.
4) The performance gradient falls below MINGRAD.
5) Validation performance has increased more than MAX_FAIL times
since the last time it decreased (when using validation).
See also NEWFF, NEWCF, TRAINGDM, TRAINGDA, TRAINGDX, TRAINLM,
TRAINRP, TRAINCGF, TRAINCGB, TRAINSCG, TRAINCGP,
TRAINOSS.
References
Gill, Murray & Wright, Practical Optimization, 1981.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:52
17953 bytes
TRAINBR Bayesian Regulation backpropagation.
Syntax
[net,tr,Ac,El] = trainbr(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = trainbr(code)
Description
TRAINBR is a network training function that updates the weight and
bias values according to Levenberg-Marquardt optimization. It
minimizes a combination of squared errors and weights
and then determines the correct combination so as to produce a
network which generalizes well. The process is called Bayesian
regularization.
TRAINBR(NET,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,
NET - Neural network.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
VV - Either empty matrix [] or structure of validation vectors.
TV - Either empty matrix [] or structure of test vectors.
and returns,
NET - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
TR.vperf - Validation performance.
TR.tperf - Test performance.
TR.mu - Adaptive mu value.
Ac - Collective layer outputs for last epoch.
El - Layer errors for last epoch.
Training occurs according to TRAINLM's training parameters,
shown here with their default values:
net.trainParam.epochs 100 Maximum number of epochs to train
net.trainParam.goal 0 Performance goal
net.trainParam.mu 0.005 Marquardt adjustment parameter
net.trainParam.mu_dec 0.1 Decrease factor for mu
net.trainParam.mu_inc 10 Increase factor for mu
net.trainParam.mu_max 1e10 Maximum value for mu
net.trainParam.max_fail 5 Maximum validation failures
net.trainParam.mem_reduc 1 Factor to use for memory/speed trade off.
net.trainParam.min_grad 1e-10 Minimum performance gradient
net.trainParam.show 25 Epochs between displays (NaN for no displays)
net.trainParam.time inf Maximum time to train in seconds
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element P{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
If VV is not [], it must be a structure of validation vectors,
VV.PD - Validation delayed inputs.
VV.Tl - Validation layer targets.
VV.Ai - Validation initial input conditions.
VV.Q - Validation batch size.
VV.TS - Validation time steps.
which is used to stop training early if the network performance
on the validation vectors fails to improve or remains the same
for MAX_FAIL epochs in a row.
If TV is not [], it must be a structure of test vectors,
TV.PD - Test delayed inputs.
TV.Tl - Test layer targets.
TV.Ai - Test initial input conditions.
TV.Q - Test batch size.
TV.TS - Test time steps.
which is used to test the generalization capability of the
trained network.
TRAINBR(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
You can create a standard network that uses TRAINBR with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINBR:
1) Set NET.trainFcn to 'trainlm'.
This will set NET.trainParam to TRAINBR's default parameters.
2) Set NET.trainParam properties to desired values.
In either case, calling TRAIN with the resulting network will
train the network with TRAINBR.
See NEWFF, NEWCF, and NEWELM for examples.
Example
Here is a problem consisting of inputs p and targets t that we would
like to solve with a network. It involves fitting a noisy sine wave.
p = [-1:.05:1];
t = sin(2*pi*p)+0.1*randn(size(p));
Here a two-layer feed-forward network is created. The network's
input ranges from -1 to 1. The first layer has 20 TANSIG
neurons, and the second layer has one PURELIN neuron. The TRAINBR
network training function is to be used. The plot of the
resulting network output should show a smooth response, without
overfitting.
% Create a Network
net=newff([-1 1],[20,1],{'tansig','purelin'},'trainbr');
% Train and Test the Network
net.trainParam.epochs = 50;
net.trainParam.show = 10;
net = train(net,p,t);
a = sim(net,p)
figure
plot(p,a,p,t,'+')
Algorithm
TRAINBR can train any network as long as its weight, net input,
and transfer functions have derivative functions.
Bayesian regularization minimizes a linear combination of squared
errors and weights. It also modifies the linear combination
so that at the end of training the resulting network has good
generalization qualities.
See MacKay (Neural Computation, vol. 4, no. 3, 1992, pp. 415-447)
and Foresee and Hagan (Proceedings of the International Joint
Conference on Neural Networks, June, 1997) for more detailed
discussions of Bayesian regularization.
This Bayesian regularization takes place within the Levenberg-Marquardt
algorithm. Backpropagation is used to calculate the Jacobian jX of
performance PERF with respect to the weight and bias variables X.
Each variable is adjusted according to Levenberg-Marquardt,
jj = jX * jX
je = jX * E
dX = -(jj+I*mu) \ je
where E is all errors and I is the identity matrix.
The adaptive value MU is increased by MU_INC until the change shown above
results in a reduced performance value. The change is then made to
the network and mu is decreased by MU_DEC.
The parameter MEM_REDUC indicates how to use memory and speed to
calculate the Jacobian jX. If MEM_REDUC is 1, then TRAINLM runs
the fastest, but can require a lot of memory. Increasing MEM_REDUC
to 2 cuts some of the memory required by a factor of two, but
slows TRAINLM somewhat. Higher values continue to decrease the
amount of memory needed and increase the training times.
Training stops when any of these conditions occur:
1) The maximum number of EPOCHS (repetitions) is reached.
2) The maximum amount of TIME has been exceeded.
3) Performance has been minimized to the GOAL.
4) The performance gradient falls below MINGRAD.
5) MU exceeds MU_MAX.
6) Validation performance has increased more than MAX_FAIL times
since the last time it decreased (when using validation).
See also NEWFF, NEWCF, TRAINGDM, TRAINGDA, TRAINGDX, TRAINLM,
TRAINRP, TRAINCGF, TRAINCGB, TRAINSCG, TRAINCGP,
TRAINBFG.
References
MacKay, Neural Computation, vol. 4, no. 3, 1992, pp. 415-447.
Foresee and Hagan, Proceedings of the International Joint
Conference on Neural Networks, June, 1997.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:52
15606 bytes
TRAINC Cyclical order incremental training w/learning functions.
Syntax
[net,tr,Ac,El] = trainc(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = trainc(code)
Description
TRAINC is not called directly. Instead it is called by TRAIN for
networks whose NET.trainFcn property is set to 'trainc'.
TRAINC trains a network with weight and bias learning rules with
incremental updates after each presentation of an input. Inputs
are presented in cyclic order.
TRAINC(NET,Pd,Tl,Ai,Q,TS,VV) takes these inputs,
NET - Neural network.
Pd - Delayed inputs.
Tl - Layer targets.
Ai - Initial input conditions.
Q - Batch size.
TS - Time steps.
VV - Ignored.
TV - Ignored.
and returns,
NET - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
Ac - Collective layer outputs.
El - Layer errors.
Training occurs according to TRAINC's training parameters
shown here with their default values:
net.trainParam.epochs 100 Maximum number of epochs to train
net.trainParam.goal 0 Performance goal
net.trainParam.show 25 Epochs between displays (NaN for no displays)
net.trainParam.time inf Maximum time to train in seconds
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element Pd{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix or [].
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
TRAINC does not implement validation or test vectors, so arguments
VV and TV are ignored.
TRAINC(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
You can create a standard network that uses TRAINC by calling
NEWP.
To prepare a custom network to be trained with TRAINC:
1) Set NET.trainFcn to 'trainc'.
(This will set NET.trainParam to TRAINC's default parameters.)
2) Set each NET.inputWeights{i,j}.learnFcn to a learning function.
Set each NET.layerWeights{i,j}.learnFcn to a learning function.
Set each NET.biases{i}.learnFcn to a learning function.
(Weight and bias learning parameters will automatically be
set to default values for the given learning function.)
To train the network:
1) Set NET.trainParam properties to desired values.
2) Set weight and bias learning parameters to desired values.
3) Call TRAIN.
See NEWP for training examples.
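As a minimal sketch of these steps, a perceptron created with NEWP (which uses TRAINC by default), trained on an illustrative AND problem:
p = [0 0 1 1; 0 1 0 1];
t = [0 0 0 1];
net = newp([0 1; 0 1],1);         % single-neuron perceptron
net.trainParam.epochs = 20;
net = train(net,p,t);             % TRAIN calls TRAINC
a = sim(net,p)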
Algorithm
For each epoch, each vector (or sequence) is presented in order
to the network with the weight and bias values updated accordingly
after each individual presentation.
Training stops when any of these conditions are met:
1) The maximum number of EPOCHS (repetitions) is reached.
2) Performance has been minimized to the GOAL.
3) The maximum amount of TIME has been exceeded.
See also NEWP, NEWLIN, TRAIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:54
10314 bytes
TRAINCGB Conjugate gradient backpropagation with Powell-Beale restarts.
Syntax
[net,tr,Ac,El] = traincgb(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = traincgb(code)
Description
TRAINCGB is a network training function that updates weight and
bias values according to the conjugate gradient backpropagation
with Powell-Beale restarts.
TRAINCGB(NET,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,
NET - Neural network.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
VV - Either empty matrix [] or structure of validation vectors.
TV - Either empty matrix [] or structure of test vectors.
and returns,
NET - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
TR.vperf - Validation performance.
TR.tperf - Test performance.
Ac - Collective layer outputs for last epoch.
El - Layer errors for last epoch.
Training occurs according to TRAINCGB's training parameters,
shown here with their default values:
net.trainParam.epochs 100 Maximum number of epochs to train
net.trainParam.show 25 Epochs between displays (NaN for no displays)
net.trainParam.goal 0 Performance goal
net.trainParam.time inf Maximum time to train in seconds
net.trainParam.min_grad 1e-6 Minimum performance gradient
net.trainParam.max_fail 5 Maximum validation failures
net.trainParam.searchFcn 'srchcha' Name of line search routine to use.
Parameters related to line search methods (not all used for all methods):
net.trainParam.scal_tol 20 Divide into delta to determine tolerance for linear search.
net.trainParam.alpha 0.001 Scale factor which determines sufficient reduction in perf.
net.trainParam.beta 0.1 Scale factor which determines sufficiently large step size.
net.trainParam.delta 0.01 Initial step size in interval location step.
net.trainParam.gama 0.1 Parameter to avoid small reductions in performance. Usually set
to 0.1. (See use in SRCH_CHA.)
net.trainParam.low_lim 0.1 Lower limit on change in step size.
net.trainParam.up_lim 0.5 Upper limit on change in step size.
net.trainParam.maxstep 100 Maximum step length.
net.trainParam.minstep 1.0e-6 Minimum step length.
net.trainParam.bmax 26 Maximum step size.
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element P{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
If VV is not [], it must be a structure of validation vectors,
VV.PD - Validation delayed inputs.
VV.Tl - Validation layer targets.
VV.Ai - Validation initial input conditions.
VV.Q - Validation batch size.
VV.TS - Validation time steps.
which is used to stop training early if the network performance
on the validation vectors fails to improve or remains the same
for MAX_FAIL epochs in a row.
If TV is not [], it must be a structure of test vectors,
TV.PD - Test delayed inputs.
TV.Tl - Test layer targets.
TV.Ai - Test initial input conditions.
TV.Q - Test batch size.
TV.TS - Test time steps.
which is used to test the generalization capability of the
trained network.
TRAINCGB(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
You can create a standard network that uses TRAINCGB with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINCGB:
1) Set NET.trainFcn to 'traincgb'.
This will set NET.trainParam to TRAINCGB's default parameters.
2) Set NET.trainParam properties to desired values.
In either case, calling TRAIN with the resulting network will
train the network with TRAINCGB.
Examples
Here is a problem consisting of inputs P and targets T that we would
like to solve with a network.
p = [0 1 2 3 4 5];
t = [0 0 0 1 1 1];
Here a two-layer feed-forward network is created. The network's
input ranges from 0 to 5. The first layer has two TANSIG
neurons, and the second layer has one LOGSIG neuron. The TRAINCGB
network training function is to be used.
% Create and Test a Network
net = newff([0 5],[2 1],{'tansig','logsig'},'traincgb');
a = sim(net,p)
% Train and Retest the Network
net.trainParam.epochs = 50;
net.trainParam.show = 10;
net.trainParam.goal = 0.1;
net = train(net,p,t);
a = sim(net,p)
See NEWFF, NEWCF, and NEWELM for other examples.
Algorithm
TRAINCGB can train any network as long as its weight, net input,
and transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance
PERF with respect to the weight and bias variables X. Each
variable is adjusted according to the following:
X = X + a*dX;
where dX is the search direction. The parameter a is selected
to minimize the performance along the search direction. The line
search function searchFcn is used to locate the minimum point.
The first search direction is the negative of the gradient of performance.
In succeeding iterations the search direction is computed from the new
gradient and the previous search direction according to the
formula:
dX = -gX + dX_old*Z;
where gX is the gradient. The parameter Z can be computed in several
different ways. The Powell-Beale variation of conjugate gradient
is distinguished by two features. First, the algorithm uses a test
to determine when to reset the search direction to the negative of
the gradient. Second, the search direction is computed from the
negative gradient, the previous search direction, and the last
search direction before the previous reset.
See Powell, Mathematical Programming, Vol. 12 (1977) pp. 241-254, for
a more detailed discussion of the algorithm.
Training stops when any of these conditions occur:
1) The maximum number of EPOCHS (repetitions) is reached.
2) The maximum amount of TIME has been exceeded.
3) Performance has been minimized to the GOAL.
4) The performance gradient falls below MINGRAD.
5) Validation performance has increased more than MAX_FAIL times
since the last time it decreased (when using validation).
See also NEWFF, NEWCF, TRAINGDM, TRAINGDA, TRAINGDX, TRAINLM,
TRAINCGP, TRAINCGF, TRAINCGB, TRAINSCG, TRAINOSS,
TRAINBFG.
References
Powell, Mathematical Programming, Vol. 12 (1977) pp. 241-254
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:54
16282 bytes
TRAINCGF Conjugate gradient backpropagation with Fletcher-Reeves updates.
Syntax
[net,tr,Ac,El] = traincgf(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = traincgf(code)
Description
TRAINCGF is a network training function that updates weight and
bias values according to the conjugate gradient backpropagation
with Fletcher-Reeves updates.
TRAINCGF(NET,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,
NET - Neural network.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
VV - Either empty matrix [] or structure of validation vectors.
TV - Either empty matrix [] or structure of test vectors.
and returns,
NET - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
TR.vperf - Validation performance.
TR.tperf - Test performance.
Ac - Collective layer outputs for last epoch.
El - Layer errors for last epoch.
Training occurs according to TRAINCGF's training parameters,
shown here with their default values:
net.trainParam.epochs 100 Maximum number of epochs to train
net.trainParam.show 25 Epochs between displays (NaN for no displays)
net.trainParam.goal 0 Performance goal
net.trainParam.time inf Maximum time to train in seconds
net.trainParam.min_grad 1e-6 Minimum performance gradient
net.trainParam.max_fail 5 Maximum validation failures
net.trainParam.searchFcn 'srchcha' Name of line search routine to use.
Parameters related to line search methods (not all used for all methods):
net.trainParam.scal_tol 20 Divide into delta to determine tolerance for linear search.
net.trainParam.alpha 0.001 Scale factor which determines sufficient reduction in perf.
net.trainParam.beta 0.1 Scale factor which determines sufficiently large step size.
net.trainParam.delta 0.01 Initial step size in interval location step.
net.trainParam.gama 0.1 Parameter to avoid small reductions in performance. Usually set
to 0.1. (See use in SRCH_CHA.)
net.trainParam.low_lim 0.1 Lower limit on change in step size.
net.trainParam.up_lim 0.5 Upper limit on change in step size.
net.trainParam.maxstep 100 Maximum step length.
net.trainParam.minstep 1.0e-6 Minimum step length.
net.trainParam.bmax 26 Maximum step size.
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element P{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
If VV is not [], it must be a structure of validation vectors,
VV.PD - Validation delayed inputs.
VV.Tl - Validation layer targets.
VV.Ai - Validation initial input conditions.
VV.Q - Validation batch size.
VV.TS - Validation time steps.
which is used to stop training early if the network performance
on the validation vectors fails to improve or remains the same
for MAX_FAIL epochs in a row.
If TV is not [], it must be a structure of test vectors,
TV.PD - Test delayed inputs.
TV.Tl - Test layer targets.
TV.Ai - Test initial input conditions.
TV.Q - Test batch size.
TV.TS - Test time steps.
which is used to test the generalization capability of the
trained network.
TRAINCGF(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
You can create a standard network that uses TRAINCGF with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINCGF:
1) Set NET.trainFcn to 'traincgf'.
This will set NET.trainParam to TRAINCGF's default parameters.
2) Set NET.trainParam properties to desired values.
In either case, calling TRAIN with the resulting network will
train the network with TRAINCGF.
Examples
Here is a problem consisting of inputs P and targets T that we would
like to solve with a network.
p = [0 1 2 3 4 5];
t = [0 0 0 1 1 1];
Here a two-layer feed-forward network is created. The network's
input ranges from 0 to 5. The first layer has two TANSIG
neurons, and the second layer has one LOGSIG neuron. The TRAINCGF
network training function is to be used.
% Create and Test a Network
net = newff([0 5],[2 1],{'tansig','logsig'},'traincgf');
a = sim(net,p)
% Train and Retest the Network
net.trainParam.epochs = 50;
net.trainParam.show = 10;
net.trainParam.goal = 0.1;
net = train(net,p,t);
a = sim(net,p)
See NEWFF, NEWCF, and NEWELM for other examples.
Algorithm
TRAINCGF can train any network as long as its weight, net input,
and transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance
PERF with respect to the weight and bias variables X. Each
variable is adjusted according to the following:
X = X + a*dX;
where dX is the search direction. The parameter a is selected
to minimize the performance along the search direction. The line
search function searchFcn is used to locate the minimum point.
The first search direction is the negative of the gradient of performance.
In succeeding iterations the search direction is computed from the new
gradient and the previous search direction, according to the
formula:
dX = -gX + dX_old*Z;
where gX is the gradient. The parameter Z can be computed in several
different ways. For the Fletcher-Reeves variation of conjugate gradient
it is computed according to
Z=normnew_sqr/norm_sqr;
where norm_sqr is the norm square of the previous gradient and
normnew_sqr is the norm square of the current gradient.
See page 78 of Scales (Introduction to Non-Linear Optimization 1985) for
a more detailed discussion of the algorithm.
Training stops when any of these conditions occur:
1) The maximum number of EPOCHS (repetitions) is reached.
2) The maximum amount of TIME has been exceeded.
3) Performance has been minimized to the GOAL.
4) The performance gradient falls below MINGRAD.
5) Validation performance has increased more than MAX_FAIL times
since the last time it decreased (when using validation).
See also NEWFF, NEWCF, TRAINGDM, TRAINGDA, TRAINGDX, TRAINLM,
TRAINCGP, TRAINCGB, TRAINSCG, TRAINOSS,
TRAINBFG.
References
Scales, Introduction to Non-Linear Optimization, 1985.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:56
15428 bytes
TRAINCGP Conjugate gradient backpropagation with Polak-Ribiere updates.
Syntax
[net,tr,Ac,El] = traincgp(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = traincgp(code)
Description
TRAINCGP is a network training function that updates weight and
bias values according to the conjugate gradient backpropagation
with Polak-Ribiere updates.
TRAINCGP(NET,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,
NET - Neural network.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
VV - Either empty matrix [] or structure of validation vectors.
TV - Either empty matrix [] or structure of test vectors.
and returns,
NET - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
TR.vperf - Validation performance.
TR.tperf - Test performance.
Ac - Collective layer outputs for last epoch.
El - Layer errors for last epoch.
Training occurs according to TRAINCGP's training parameters,
shown here with their default values:
net.trainParam.epochs 100 Maximum number of epochs to train
net.trainParam.show 25 Epochs between displays (NaN for no displays)
net.trainParam.goal 0 Performance goal
net.trainParam.time inf Maximum time to train in seconds
net.trainParam.min_grad 1e-6 Minimum performance gradient
net.trainParam.max_fail 5 Maximum validation failures
net.trainParam.searchFcn 'srchcha' Name of line search routine to use.
Parameters related to line search methods (not all used for all methods):
net.trainParam.scal_tol 20 Divide into delta to determine tolerance for linear search.
net.trainParam.alpha 0.001 Scale factor which determines sufficient reduction in perf.
net.trainParam.beta 0.1 Scale factor which determines sufficiently large step size.
net.trainParam.delta 0.01 Initial step size in interval location step.
net.trainParam.gama 0.1 Parameter to avoid small reductions in performance. Usually set
to 0.1. (See use in SRCH_CHA.)
net.trainParam.low_lim 0.1 Lower limit on change in step size.
net.trainParam.up_lim 0.5 Upper limit on change in step size.
net.trainParam.maxstep 100 Maximum step length.
net.trainParam.minstep 1.0e-6 Minimum step length.
net.trainParam.bmax 26 Maximum step size.
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element P{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
If VV is not [], it must be a structure of validation vectors,
VV.PD - Validation delayed inputs.
VV.Tl - Validation layer targets.
VV.Ai - Validation initial input conditions.
VV.Q - Validation batch size.
VV.TS - Validation time steps.
which is used to stop training early if the network performance
on the validation vectors fails to improve or remains the same
for MAX_FAIL epochs in a row.
If TV is not [], it must be a structure of validation vectors,
TV.PD - Validation delayed inputs.
TV.Tl - Validation layer targets.
TV.Ai - Validation initial input conditions.
TV.Q - Validation batch size.
TV.TS - Validation time steps.
which is used to test the generalization capability of the
trained network.
TRAINCGP(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
You can create a standard network that uses TRAINCGP with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINCGP:
1) Set NET.trainFcn to 'traincgp'.
This will set NET.trainParam to TRAINCGP's default parameters.
2) Set NET.trainParam properties to desired values.
In either case, calling TRAIN with the resulting network will
train the network with TRAINCGP.
Examples
Here is a problem consisting of inputs P and targets T that we would
like to solve with a network.
p = [0 1 2 3 4 5];
t = [0 0 0 1 1 1];
Here a two-layer feed-forward network is created. The network's
input ranges from 0 to 5. The first layer has two TANSIG
neurons, and the second layer has one LOGSIG neuron. The TRAINCGP
network training function is to be used.
% Create and Test a Network
net = newff([0 5],[2 1],{'tansig','logsig'},'traincgp');
a = sim(net,p)
% Train and Retest the Network
net.trainParam.epochs = 50;
net.trainParam.show = 10;
net.trainParam.goal = 0.1;
net = train(net,p,t);
a = sim(net,p)
See NEWFF, NEWCF, and NEWELM for other examples.
Algorithm
TRAINCGP can train any network as long as its weight, net input,
and transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance
PERF with respect to the weight and bias variables X. Each
variable is adjusted according to the following:
X = X + a*dX;
where dX is the search direction. The parameter a is selected
to minimize the performance along the search direction. The line
search function searchFcn is used to locate the minimum point.
The first search direction is the negative of the gradient of performance.
In succeeding iterations the search direction is computed from the new
gradient and the previous search direction according to the
formula:
dX = -gX + dX_old*Z;
where gX is the gradient. The parameter Z can be computed in several
different ways. For the Polak-Ribiere variation of conjugate gradient
it is computed according to:
Z = ((gX - gX_old)'*gX)/norm_sqr;
where norm_sqr is the norm square of the previous gradient and
gX_old is the gradient on the previous iteration.
See page 78 of Scales (Introduction to Non-Linear Optimization 1985) for
a more detailed discussion of the algorithm.
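As an illustration (not the toolbox implementation), a minimal MATLAB sketch of one
Polak-Ribiere update is shown here; gX, gX_old, dX_old and X are assumed to hold the
current gradient, previous gradient, previous search direction and weight/bias vector,
and the step a, normally chosen by the line search function searchFcn, is fixed for brevity:
% Minimal sketch of one Polak-Ribiere conjugate gradient step (illustrative only)
norm_sqr = gX_old'*gX_old;          % norm square of the previous gradient
Z  = ((gX - gX_old)'*gX)/norm_sqr;  % Polak-Ribiere factor
dX = -gX + dX_old*Z;                % new search direction
a  = 0.01;                          % placeholder for the step found by the line search
X  = X + a*dX;                      % update the weight and bias vector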
Training stops when any of these conditions occur:
1) The maximum number of EPOCHS (repetitions) is reached.
2) The maximum amount of TIME has been exceeded.
3) Performance has been minimized to the GOAL.
4) The performance gradient falls below MINGRAD.
5) Validation performance has increased more than MAX_FAIL times
since the last time it decreased (when using validation).
See also NEWFF, NEWCF, TRAINGDM, TRAINGDA, TRAINGDX, TRAINLM,
TRAINRP, TRAINCGF, TRAINCGB, TRAINSCG, TRAINOSS,
TRAINBFG.
References
Scales, Introduction to Non-Linear Optimization, 1985.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:56
15429 bytes
TRAINGD Gradient descent backpropagation.
Syntax
[net,tr,Ac,El] = traingd(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = traingd(code)
Description
TRAINGD is a network training function that updates weight and
bias values according to gradient descent.
TRAINGD(NET,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,
NET - Neural network.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
VV - Empty matrix [] or structure of validation vectors.
TV - Empty matrix [] or structure of test vectors.
and returns:
NET - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
TR.vperf - Validation performance.
TR.tperf - Test performance.
Ac - Collective layer outputs for last epoch.
El - Layer errors for last epoch.
Training occurs according to the TRAINGD's training parameters
shown here with their default values:
net.trainParam.epochs 10 Maximum number of epochs to train
net.trainParam.goal 0 Performance goal
net.trainParam.lr 0.01 Learning rate
net.trainParam.max_fail 5 Maximum validation failures
net.trainParam.min_grad 1e-10 Minimum performance gradient
net.trainParam.show 25 Epochs between displays (NaN for no displays)
net.trainParam.time inf Maximum time to train in seconds
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element P{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
If VV or TV is not [], it must be a structure of vectors:
VV.PD, TV.PD - Validation/test delayed inputs.
VV.Tl, TV.Tl - Validation/test layer targets.
VV.Ai, TV.Ai - Validation/test initial input conditions.
VV.Q, TV.Q - Validation/test batch size.
VV.TS, TV.TS - Validation/test time steps.
Validation vectors are used to stop training early if the network
performance on the validation vectors fails to improve or remains
the same for MAX_FAIL epochs in a row. Test vectors are used as
a further check that the network is generalizing well, but do not
have any effect on training.
TRAINGD(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
You can create a standard network that uses TRAINGD with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINGD:
1) Set NET.trainFcn to 'traingd'.
This will set NET.trainParam to TRAINGD's default parameters.
2) Set NET.trainParam properties to desired values.
In either case, calling TRAIN with the resulting network will
train the network with TRAINGD.
See NEWFF, NEWCF, and NEWELM for examples.
Algorithm
TRAINGD can train any network as long as its weight, net input,
and transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance
PERF with respect to the weight and bias variables X. Each
variable is adjusted according to gradient descent:
dX = lr * dperf/dX
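A minimal MATLAB sketch of this epoch update is shown here (illustrative only;
dperf_dX stands for the dperf/dX term above and X for the weight/bias vector,
both assumed to be available):
% Minimal sketch of one gradient descent update (illustrative only)
lr = 0.01;              % learning rate, net.trainParam.lr
dX = lr*dperf_dX;       % step along the performance derivative, as in the formula above
X  = X + dX;            % apply the update to the weights and biases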
Training stops when any of these conditions occurs:
1) The maximum number of EPOCHS (repetitions) is reached.
2) The maximum amount of TIME has been exceeded.
3) Performance has been minimized to the GOAL.
4) The performance gradient falls below MINGRAD.
5) Validation performance has increased more than MAX_FAIL times
since the last time it decreased (when using validation).
See also NEWFF, NEWCF, TRAINGDM, TRAINGDA, TRAINGDX, TRAINLM.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:58
9327 bytes
TRAINGDA Gradient descent with adaptive lr backpropagation.
Syntax
[net,tr,Ac,El] = traingda(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = traingda(code)
Description
TRAINGDA is a network training function that updates weight and
bias values according to gradient descent with adaptive
learning rate.
TRAINGDA(NET,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,
NET - Neural network.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
VV - Empty matrix [] or structure of validation vectors.
TV - Empty matrix [] or structure of test vectors.
and returns,
NET - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
TR.vperf - Validation performance.
TR.tperf - Test performance.
TR.lr - Adaptive learning rate.
Ac - Collective layer outputs for last epoch.
El - Layer errors for last epoch.
Training occurs according to the TRAINGDA's training parameters,
shown here with their default values:
net.trainParam.epochs 10 Maximum number of epochs to train
net.trainParam.goal 0 Performance goal
net.trainParam.lr 0.01 Learning rate
net.trainParam.lr_inc 1.05 Ratio to increase learning rate
net.trainParam.lr_dec 0.7 Ratio to decrease learning rate
net.trainParam.max_fail 5 Maximum validation failures
net.trainParam.max_perf_inc 1.04 Maximum performance increase
net.trainParam.min_grad 1e-10 Minimum performance gradient
net.trainParam.show 25 Epochs between displays (NaN for no displays)
net.trainParam.time inf Maximum time to train in seconds
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element P{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
If VV or TV is not [], it must be a structure of vectors:
VV.PD, TV.PD - Validation/test delayed inputs.
VV.Tl, TV.Tl - Validation/test layer targets.
VV.Ai, TV.Ai - Validation/test initial input conditions.
VV.Q, TV.Q - Validation/test batch size.
VV.TS, TV.TS - Validation/test time steps.
Validation vectors are used to stop training early if the network
performance on the validation vectors fails to improve or remains
the same for MAX_FAIL epochs in a row. Test vectors are used as
a further check that the network is generalizing well, but do not
have any effect on training.
TRAINGDA(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
You can create a standard network that uses TRAINGDA with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINGDA:
1) Set NET.trainFcn to 'traingda'.
This will set NET.trainParam to TRAINGDA's default parameters.
2) Set NET.trainParam properties to desired values.
In either case, calling TRAIN with the resulting network will
train the network with TRAINGDA.
See NEWFF, NEWCF, and NEWELM for examples.
Algorithm
TRAINGDA can train any network as long as its weight, net input,
and transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance
DPERF with respect to the weight and bias variables X. Each
variable is adjusted according to gradient descent:
dX = lr*dperf/dX
Each epoch, if performance decreases toward the goal, then
the learning rate is increased by the factor lr_inc. If
performance increases by more than the factor max_perf_inc,
the learning rate is reduced by the factor lr_dec and the
change that increased the performance is not made.
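A minimal MATLAB sketch of this learning rate adaptation is shown here (illustrative only;
perf, perf_old, X and X_old are assumed to hold the current and previous performance and
weight/bias vectors, and lr, lr_inc, lr_dec and max_perf_inc stand for the training
parameters listed above):
% Minimal sketch of the adaptive learning rate rule (illustrative only)
if perf < perf_old
    lr = lr*lr_inc;                    % performance moved toward the goal: grow the rate
elseif perf > perf_old*max_perf_inc
    lr = lr*lr_dec;                    % performance got much worse: shrink the rate
    X  = X_old;                        % and discard the change that increased performance
end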
Training stops when any of these conditions occur:
1) The maximum number of EPOCHS (repetitions) is reached.
2) The maximum amount of TIME has been exceeded.
3) Performance has been minimized to the GOAL.
4) The performance gradient falls below MINGRAD.
5) Validation performance has increased more than MAX_FAIL times
since the last time it decreased (when using validation).
See also NEWFF, NEWCF, TRAINGD, TRAINGDM, TRAINGDX, TRAINLM.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:20:58
11173 bytes
TRAINGDM Gradient descent with momentum backpropagation.
Syntax
[net,tr,Ac,El] = traingdm(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = traingdm(code)
Description
TRAINGDM is a network training function that updates weight and
bias values according to gradient descent with momentum.
TRAINGDM(NET,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs:
NET - Neural network.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
VV - Empty matrix [] or structure of validation vectors.
TV - Empty matrix [] or structure of test vectors.
and returns:
NET - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
TR.vperf - Validation performance.
TR.tperf - Test performance.
Ac - Collective layer outputs for last epoch.
El - Layer errors for last epoch.
Training occurs according to the TRAINGDM's training parameters
shown here with their default values:
net.trainParam.epochs 10 Maximum number of epochs to train
net.trainParam.goal 0 Performance goal
net.trainParam.lr 0.01 Learning rate
net.trainParam.max_fail 5 Maximum validation failures
net.trainParam.mc 0.9 Momentum constant.
net.trainParam.min_grad 1e-10 Minimum performance gradient
net.trainParam.show 25 Epochs between displays (NaN for no displays)
net.trainParam.time inf Maximum time to train in seconds
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element P{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
If VV or TV is not [], it must be a structure of vectors:
VV.PD, TV.PD - Validation/test delayed inputs.
VV.Tl, TV.Tl - Validation/test layer targets.
VV.Ai, TV.Ai - Validation/test initial input conditions.
VV.Q, TV.Q - Validation/test batch size.
VV.TS, TV.TS - Validation/test time steps.
Validation vectors are used to stop training early if the network
performance on the validation vectors fails to improve or remains
the same for MAX_FAIL epochs in a row. Test vectors are used as
a further check that the network is generalizing well, but do not
have any effect on training.
TRAINGDM(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
You can create a standard network that uses TRAINGDM with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINGDM:
1) Set NET.trainFcn to 'traingdm'.
This will set NET.trainParam to TRAINGDM's default parameters.
2) Set NET.trainParam properties to desired values.
In either case, calling TRAIN with the resulting network will
train the network with TRAINGDM.
See NEWFF, NEWCF, and NEWELM for examples.
Algorithm
TRAINGDM can train any network as long as its weight, net input,
and transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance
PERF with respect to the weight and bias variables X. Each
variable is adjusted according to gradient descent with
momentum,
dX = mc*dXprev + lr*(1-mc)*dperf/dX
where dXprev is the previous change to the weight or bias.
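A minimal MATLAB sketch of this momentum update is shown here (illustrative only;
dXprev and dperf_dX are assumed to hold the previous step and the dperf/dX term above):
% Minimal sketch of one gradient descent step with momentum (illustrative only)
mc = 0.9;   lr = 0.01;                   % momentum constant and learning rate
dX = mc*dXprev + lr*(1-mc)*dperf_dX;     % blend the previous step with the gradient step
X  = X + dX;                             % apply the update
dXprev = dX;                             % remember the step for the next epoch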
Training stops when any of these conditions occur:
1) The maximum number of EPOCHS (repetitions) is reached.
2) The maximum amount of TIME has been exceeded.
3) Performance has been minimized to the GOAL.
4) The performance gradient falls below MINGRAD.
5) Validation performance has increased more than MAX_FAIL times
since the last time it decreased (when using validation).
See also NEWFF, NEWCF, TRAINGD, TRAINGDA, TRAINGDX, TRAINLM.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:00
10046 bytes
TRAINGDX Gradient descent w/momentum & adaptive lr backpropagation.
Syntax
[net,tr,Ac,El] = traingdx(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = traingdx(code)
Description
TRAINGDX is a network training function that updates weight and
bias values according to gradient descent momentum and an
adaptive learning rate.
TRAINGDX(NET,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,
NET - Neural network.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
VV - Empty matrix [] or structure of validation vectors.
TV - Empty matrix [] or structure of test vectors.
and returns,
NET - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
TR.vperf - Validation performance.
TR.tperf - Test performance.
TR.lr - Adaptive learning rate.
Ac - Collective layer outputs for last epoch.
El - Layer errors for last epoch.
Training occurs according to the TRAINGDX's training parameters
shown here with their default values:
net.trainParam.epochs 10 Maximum number of epochs to train
net.trainParam.goal 0 Performance goal
net.trainParam.lr 0.01 Learning rate
net.trainParam.lr_inc 1.05 Ratio to increase learning rate
net.trainParam.lr_dec 0.7 Ratio to decrease learning rate
net.trainParam.max_fail 5 Maximum validation failures
net.trainParam.max_perf_inc 1.04 Maximum performance increase
net.trainParam.mc 0.9 Momentum constant.
net.trainParam.min_grad 1e-10 Minimum performance gradient
net.trainParam.show 25 Epochs between displays (NaN for no displays)
net.trainParam.time inf Maximum time to train in seconds
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element P{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
If VV or TV is not [], it must be a structure of vectors:
VV.PD, TV.PD - Validation/test delayed inputs.
VV.Tl, TV.Tl - Validation/test layer targets.
VV.Ai, TV.Ai - Validation/test initial input conditions.
VV.Q, TV.Q - Validation/test batch size.
VV.TS, TV.TS - Validation/test time steps.
Validation vectors are used to stop training early if the network
performance on the validation vectors fails to improve or remains
the same for MAX_FAIL epochs in a row. Test vectors are used as
a further check that the network is generalizing well, but do not
have any effect on training.
TRAINGDX(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
You can create a standard network that uses TRAINGDX with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINGDX:
1) Set NET.trainFcn to 'traingdx'.
This will set NET.trainParam to TRAINGDX's default parameters.
2) Set NET.trainParam properties to desired values.
In either case, calling TRAIN with the resulting network will
train the network with TRAINGDX.
See NEWFF, NEWCF, and NEWELM for examples.
Algorithm
TRAINGDX can train any network as long as its weight, net input,
and transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance
PERF with respect to the weight and bias variables X. Each
variable is adjusted according to the gradient descent
with momentum.
dX = mc*dXprev + lr*mc*dperf/dX
where dXprev is the previous change to the weight or bias.
For each epoch, if performance decreases toward the goal, then
the learning rate is increased by the factor lr_inc. If
performance increases by more than the factor max_perf_inc,
the learning rate is adjusted by the factor lr_dec and the
change, which increased the performance, is not made.
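A minimal MATLAB sketch combining the momentum step with the learning rate adaptation
is shown here (illustrative only; dXprev, dperf_dX, perf and perf_old are assumed
variables, and mc, lr, lr_inc, lr_dec and max_perf_inc stand for the training
parameters listed above):
% Minimal sketch of one epoch update with momentum and adaptive learning rate (illustrative only)
dX = mc*dXprev + lr*mc*dperf_dX;         % momentum step, as in the formula above
X  = X + dX;
dXprev = dX;
if perf < perf_old
    lr = lr*lr_inc;                      % performance improved: grow the learning rate
elseif perf > perf_old*max_perf_inc
    lr = lr*lr_dec;                      % performance got much worse: shrink the rate
    X  = X - dX;                         % and undo the change that increased performance
end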
Training stops when any of these conditions occur:
1) The maximum number of EPOCHS (repetitions) is reached.
2) The maximum amount of TIME has been exceeded.
3) Performance has been minimized to the GOAL.
4) The performance gradient falls below MINGRAD.
5) Validation performance has increased more than MAX_FAIL times
since the last time it decreased (when using validation).
See also NEWFF, NEWCF, TRAINGD, TRAINGDM, TRAINGDA, TRAINLM.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:00
11693 bytes
TRAINLM Levenberg-Marquardt backpropagation.
Syntax
[net,tr] = trainlm(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = trainlm(code)
Description
TRAINLM is a network training function that updates weight and
bias values according to Levenberg-Marquardt optimization.
TRAINLM(NET,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,
NET - Neural network.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
VV - Either empty matrix [] or structure of validation vectors.
TV - Either empty matrix [] or structure of test vectors.
and returns,
NET - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
TR.vperf - Validation performance.
TR.tperf - Test performance.
TR.mu - Adaptive mu value.
Training occurs according to the TRAINLM's training parameters
shown here with their default values:
net.trainParam.epochs 100 Maximum number of epochs to train
net.trainParam.goal 0 Performance goal
net.trainParam.max_fail 5 Maximum validation failures
net.trainParam.mem_reduc 1 Factor to use for memory/speed trade off.
net.trainParam.min_grad 1e-10 Minimum performance gradient
net.trainParam.mu 0.001 Initial Mu
net.trainParam.mu_dec 0.1 Mu decrease factor
net.trainParam.mu_inc 10 Mu increase factor
net.trainParam.mu_max 1e10 Maximum Mu
net.trainParam.show 25 Epochs between displays (NaN for no displays)
net.trainParam.time inf Maximum time to train in seconds
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element P{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
If VV or TV is not [], it must be a structure of vectors:
VV.PD, TV.PD - Validation/test delayed inputs.
VV.Tl, TV.Tl - Validation/test layer targets.
VV.Ai, TV.Ai - Validation/test initial input conditions.
VV.Q, TV.Q - Validation/test batch size.
VV.TS, TV.TS - Validation/test time steps.
Validation vectors are used to stop training early if the network
performance on the validation vectors fails to improve or remains
the same for MAX_FAIL epochs in a row. Test vectors are used as
a further check that the network is generalizing well, but do not
have any effect on training.
TRAINLM(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
You can create a standard network that uses TRAINLM with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINLM:
1) Set NET.trainFcn to 'trainlm'.
This will set NET.trainParam to TRAINLM's default parameters.
2) Set NET.trainParam properties to desired values.
In either case, calling TRAIN with the resulting network will
train the network with TRAINLM.
See NEWFF, NEWCF, and NEWELM for examples.
Algorithm
TRAINLM can train any network as long as its weight, net input,
and transfer functions have derivative functions.
Backpropagation is used to calculate the Jacobian jX of performance
PERF with respect to the weight and bias variables X. Each
variable is adjusted according to Levenberg-Marquardt,
jj = jX * jX
je = jX * E
dX = -(jj+I*mu) \ je
where E is all errors and I is the identity matrix.
The adaptive value MU is increased by MU_INC until the change above
results in a reduced performance value. The change is then made to
the network and mu is decreased by MU_DEC.
The parameter MEM_REDUC indicates how to use memory and speed to
calculate the Jacobian jX. If MEM_REDUC is 1, then TRAINLM runs
the fastest, but can require a lot of memory. Increasing MEM_REDUC
to 2, cuts some of the memory required by a factor of two, but
slows TRAINLM somewhat. Higher values continue to decrease the
amount of memory needed and increase training times.
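In standard notation, one Levenberg-Marquardt step can be sketched as follows
(illustrative only, not the toolbox code; J is assumed to be the Jacobian of the
error vector e with respect to the weight/bias vector X):
% Minimal sketch of one Levenberg-Marquardt step (illustrative only)
jj = J'*J;                             % Gauss-Newton approximation of the Hessian
je = J'*e;                             % gradient of the sum-squared error
dX = -(jj + mu*eye(size(jj,1))) \ je;  % large mu ~ gradient descent, small mu ~ Gauss-Newton
X  = X + dX;
% mu is multiplied by mu_inc until the step reduces performance; the step is then
% accepted and mu is multiplied by mu_dec, as described above.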
Training stops when any of these conditions occurs:
1) The maximum number of EPOCHS (repetitions) is reached.
2) The maximum amount of TIME has been exceeded.
3) Performance has been minimized to the GOAL.
4) The performance gradient falls below MINGRAD.
5) MU exceeds MU_MAX.
6) Validation performance has increased more than MAX_FAIL times
since the last time it decreased (when using validation).
See also NEWFF, NEWCF, TRAINGD, TRAINGDM, TRAINGDA, TRAINGDX.
ApplicationRoot\WavixIV\neural501
14-Nov-2005 19:18:20
14182 bytes
TRAINOSS One step secant backpropagation.
Syntax
[net,tr,Ac,El] = trainoss(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = trainoss(code)
Description
TRAINOSS is a network training function that updates weight and
bias values according to the one step secant method.
TRAINOSS(NET,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,
NET - Neural network.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
VV - Either empty matrix [] or structure of validation vectors.
TV - Either empty matrix [] or structure of test vectors.
and returns,
NET - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
TR.vperf - Validation performance.
TR.tperf - Test performance.
Ac - Collective layer outputs for last epoch.
El - Layer errors for last epoch.
Training occurs according to the TRAINOSS's training parameters,
shown here with their default values:
net.trainParam.epochs 100 Maximum number of epochs to train
net.trainParam.show 25 Epochs between displays (NaN for no displays)
net.trainParam.goal 0 Performance goal
net.trainParam.time inf Maximum time to train in seconds
net.trainParam.min_grad 1e-6 Minimum performance gradient
net.trainParam.max_fail 5 Maximum validation failures
net.trainParam.searchFcn 'srchcha' Name of line search routine to use.
Parameters related to line search methods (not all used for all methods):
net.trainParam.scale_tol 20 Divide into delta to determine tolerance for linear search.
net.trainParam.alpha 0.001 Scale factor which determines sufficient reduction in perf.
net.trainParam.beta 0.1 Scale factor which determines sufficiently large step size.
net.trainParam.delta 0.01 Initial step size in interval location step.
net.trainParam.gama 0.1 Parameter to avoid small reductions in performance. Usually set
to 0.1. (See use in SRCH_CHA.)
net.trainParam.low_lim 0.1 Lower limit on change in step size.
net.trainParam.up_lim 0.5 Upper limit on change in step size.
net.trainParam.maxstep 100 Maximum step length.
net.trainParam.minstep 1.0e-6 Minimum step length.
net.trainParam.bmax 26 Maximum step size.
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element P{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
If VV is not [], it must be a structure of validation vectors,
VV.PD - Validation delayed inputs.
VV.Tl - Validation layer targets.
VV.Ai - Validation initial input conditions.
VV.Q - Validation batch size.
VV.TS - Validation time steps.
which is used to stop training early if the network performance
on the validation vectors fails to improve or remains the same
for MAX_FAIL epochs in a row.
If TV is not [], it must be a structure of validation vectors,
TV.PD - Validation delayed inputs.
TV.Tl - Validation layer targets.
TV.Ai - Validation initial input conditions.
TV.Q - Validation batch size.
TV.TS - Validation time steps.
which is used to test the generalization capability of the
trained network.
TRAINOSS(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
You can create a standard network that uses TRAINOSS with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINOSS:
1) Set NET.trainFcn to 'trainoss'.
This will set NET.trainParam to TRAINOSS's default parameters.
2) Set NET.trainParam properties to desired values.
In either case, calling TRAIN with the resulting network will
train the network with TRAINOSS.
Examples
Here is a problem consisting of inputs P and targets T that we would
like to solve with a network.
p = [0 1 2 3 4 5];
t = [0 0 0 1 1 1];
Here a two-layer feed-forward network is created. The network's
input ranges from 0 to 5. The first layer has two TANSIG
neurons, and the second layer has one LOGSIG neuron. The TRAINOSS
network training function is to be used.
% Create and Test a Network
net = newff([0 5],[2 1],{'tansig','logsig'},'trainoss');
a = sim(net,p)
% Train and Retest the Network
net.trainParam.epochs = 50;
net.trainParam.show = 10;
net.trainParam.goal = 0.1;
net = train(net,p,t);
a = sim(net,p)
See NEWFF, NEWCF, and NEWELM for other examples.
Algorithm
TRAINOSS can train any network as long as its weight, net input,
and transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance
PERF with respect to the weight and bias variables X. Each
variable is adjusted according to the following:
X = X + a*dX;
where dX is the search direction. The parameter a is selected
to minimize the performance along the search direction. The line
search function searchFcn is used to locate the minimum point.
The first search direction is the negative of the gradient of performance.
In succeeding iterations the search direction is computed from the new
gradient and the previous steps and gradients according to the following
formula:
dX = -gX + Ac*X_step + Bc*dgX;
where gX is the gradient, X_step is the change in the weights on the
previous iteration, and dgX is the change in the gradient from the
last iteration.
See Battiti (Neural Computation, vol. 4, 1992, pp. 141-166) for
a more detailed discussion of the one step secant algorithm.
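A minimal MATLAB sketch of this search direction is shown here (illustrative only;
X_step, dgX and gX are assumed to hold the previous weight change, the change in the
gradient and the current gradient, and the scalars Ac and Bc are assumed to have been
computed from X_step and dgX as in Battiti, 1992):
% Minimal sketch of one one-step-secant update (illustrative only)
dX = -gX + Ac*X_step + Bc*dgX;    % new search direction
a  = 0.01;                        % placeholder for the step found by the line search (searchFcn)
X  = X + a*dX;                    % update the weight and bias vector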
Training stops when any of these conditions occur:
1) The maximum number of EPOCHS (repetitions) is reached.
2) The maximum amount of TIME has been exceeded.
3) Performance has been minimized to the GOAL.
4) The performance gradient falls below MINGRAD.
5) Validation performance has increased more than MAX_FAIL times
since the last time it decreased (when using validation).
See also NEWFF, NEWCF, TRAINGDM, TRAINGDA, TRAINGDX, TRAINLM,
TRAINRP, TRAINCGF, TRAINCGB, TRAINSCG, TRAINCGP,
TRAINBFG.
References
Battiti, Neural Computation, vol. 4, 1992, pp. 141-166.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:02
14962 bytes
TRAINR Random order incremental training w/learning functions.
Syntax
[net,tr,Ac,El] = trainr(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = trainr(code)
Description
TRAINR is not called directly. Instead it is called by TRAIN for
networks whose NET.trainFcn property is set to 'trainr'.
TRAINR trains a network with weight and bias learning rules with
incremental updates after each presentation of an input. Inputs
are presented in random order.
TRAINR(NET,Pd,Tl,Ai,Q,TS,VV) takes these inputs,
NET - Neural network.
Pd - Delayed inputs.
Tl - Layer targets.
Ai - Initial input conditions.
Q - Batch size.
TS - Time steps.
VV - Ignored.
TV - Ignored.
and returns,
NET - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
Ac - Collective layer outputs.
El - Layer errors.
Training occurs according to the TRAINR's training parameters
shown here with their default values:
net.trainParam.epochs 100 Maximum number of epochs to train
net.trainParam.goal 0 Performance goal
net.trainParam.show 25 Epochs between displays (NaN for no displays)
net.trainParam.time inf Maximum time to train in seconds
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element Pd{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix or [].
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
TRAINR does not implement validation or test vectors, so arguments
VV and TV are ignored.
TRAINR(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
You can create a standard network that uses TRAINR by calling
NEWC or NEWSOM.
To prepare a custom network to be trained with TRAINR:
1) Set NET.trainFcn to 'trainr'.
(This will set NET.trainParam to TRAINR's default parameters.)
2) Set each NET.inputWeights{i,j}.learnFcn to a learning function.
Set each NET.layerWeights{i,j}.learnFcn to a learning function.
Set each NET.biases{i}.learnFcn to a learning function.
(Weight and bias learning parameters will automatically be
set to default values for the given learning function.)
To train the network:
1) Set NET.trainParam properties to desired values.
2) Set weight and bias learning parameters to desired values.
3) Call TRAIN.
See NEWC and NEWSOM for training examples.
Algorithm
For each epoch, all training vectors (or sequences) are
presented once in a different random order, and the network's
weight and bias values are updated after each individual
presentation.
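A minimal MATLAB sketch of this presentation scheme is shown here (illustrative only;
P and T are assumed cell arrays of Q training vectors, and applyLearnFcns is a
hypothetical helper that applies the configured weight and bias learning rules to one
vector):
% Minimal sketch of random-order incremental training (illustrative only)
for epoch = 1:epochs
    order = randperm(Q);                            % new random presentation order each epoch
    for q = order
        net = applyLearnFcns(net, P{q}, T{q});      % hypothetical helper: incremental update
    end
end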
Training stops when any of these conditions are met:
1) The maximum number of EPOCHS (repetitions) is reached.
2) Performance has been minimized to the GOAL.
3) The maximum amount of TIME has been exceeded.
See also NEWP, NEWLIN, TRAIN.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:02
10203 bytes
TRAINRP RPROP backpropagation.
Syntax
[net,tr,Ac,El] = trainrp(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = trainrp(code)
Description
TRAINRP is a network training function that updates weight and
bias values according to the resilient backpropagation algorithm
(RPROP).
TRAINRP(NET,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,
NET - Neural network.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
VV - Either empty matrix [] or structure of validation vectors.
TV - Either empty matrix [] or structure of test vectors.
and returns,
NET - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
TR.vperf - Validation performance.
TR.tperf - Test performance.
Ac - Collective layer outputs for last epoch.
El - Layer errors for last epoch.
Training occurs according to the TRAINRP's training parameters
shown here with their default values:
net.trainParam.epochs 100 Maximum number of epochs to train
net.trainParam.show 25 Epochs between displays (NaN for no displays)
net.trainParam.goal 0 Performance goal
net.trainParam.time inf Maximum time to train in seconds
net.trainParam.min_grad 1e-6 Minimum performance gradient
net.trainParam.max_fail 5 Maximum validation failures
net.trainParam.lr 0.01 Learning rate
net.trainParam.delt_inc 1.2 Increment to weight change
net.trainParam.delt_dec 0.5 Decrement to weight change
net.trainParam.delta0 0.07 Initial weight change
net.trainParam.deltamax 50.0 Maximum weight change
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element P{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
If VV is not [], it must be a structure of validation vectors,
VV.PD - Validation delayed inputs.
VV.Tl - Validation layer targets.
VV.Ai - Validation initial input conditions.
VV.Q - Validation batch size.
VV.TS - Validation time steps.
which is used to stop training early if the network performance
on the validation vectors fails to improve or remains the same
for MAX_FAIL epochs in a row.
If TV is not [], it must be a structure of validation vectors,
TV.PD - Validation delayed inputs.
TV.Tl - Validation layer targets.
TV.Ai - Validation initial input conditions.
TV.Q - Validation batch size.
TV.TS - Validation time steps.
which is used to test the generalization capability of the
trained network.
TRAINRP(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
You can create a standard network that uses TRAINRP with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINRP:
1) Set NET.trainFcn to 'trainrp'.
This will set NET.trainParam to TRAINRP's default parameters.
2) Set NET.trainParam properties to desired values.
In either case, calling TRAIN with the resulting network will
train the network with TRAINRP.
Examples
Here is a problem consisting of inputs P and targets T that we would
like to solve with a network.
p = [0 1 2 3 4 5];
t = [0 0 0 1 1 1];
Here a two-layer feed-forward network is created. The network's
input ranges from 0 to 5. The first layer has two TANSIG
neurons, and the second layer has one LOGSIG neuron. The TRAINRP
network training function is to be used.
% Create and Test a Network
net = newff([0 5],[2 1],{'tansig','logsig'},'trainrp');
a = sim(net,p)
% Train and Retest the Network
net.trainParam.epochs = 50;
net.trainParam.show = 10;
net.trainParam.goal = 0.1;
net = train(net,p,t);
a = sim(net,p)
See NEWFF, NEWCF, and NEWELM for other examples.
Algorithm
TRAINRP can train any network as long as its weight, net input,
and transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance
PERF with respect to the weight and bias variables X. Each
variable is adjusted according to the following:
dX = deltaX.*sign(gX);
where the elements of deltaX are all initialized to delta0 and
gX is the gradient. At each iteration the elements of deltaX
are modified. If an element of gX changes sign from one
iteration to the next, then the corresponding element of
deltaX is decreased by delt_dec. If an element of gX
maintains the same sign from one iteration to the next,
then the corresponding element of deltaX is increased by
delt_inc. See Riedmiller, Proceedings of the IEEE Int. Conf.
on NN (ICNN), San Francisco, 1993, pp. 586-591.
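A minimal MATLAB sketch of this step-size adaptation is shown here (illustrative only;
gX and gX_old are the current and previous gradients, deltaX the per-weight step sizes,
delt_inc, delt_dec and deltamax stand for the training parameters listed above, and the
clamping at deltamax is an added assumption, not stated in the text):
% Minimal sketch of the RPROP update (illustrative only)
same = sign(gX) == sign(gX_old);                      % which elements kept their gradient sign
deltaX(same)  = min(deltaX(same)*delt_inc, deltamax); % same sign: grow the step size
deltaX(~same) = deltaX(~same)*delt_dec;               % sign change: shrink the step size
dX = deltaX.*sign(gX);                                % step from the gradient sign only
X  = X + dX;                                          % apply the update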
Training stops when any of these conditions occur:
1) The maximum number of EPOCHS (repetitions) is reached.
2) The maximum amount of TIME has been exceeded.
3) Performance has been minimized to the GOAL.
4) The performance gradient falls below MINGRAD.
5) Validation performance has increased more than MAX_FAIL times
since the last time it decreased (when using validation).
See also NEWFF, NEWCF, TRAINGDM, TRAINGDA, TRAINGDX, TRAINLM,
TRAINCGP, TRAINCGF, TRAINCGB, TRAINSCG, TRAINOSS,
TRAINBFG.
References
Riedmiller, Proceedings of the IEEE Int. Conf. on NN (ICNN)
San Francisco, 1993, pp. 586-591.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:04
12500 bytes
TRAINS Sequential order incremental training w/learning functions.
Syntax
[net,TR,Ac,El] = trains(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = trains(code)
Description
TRAINS is not called directly. Instead it is called by TRAIN for
networks whose NET.trainFcn property is set to 'trains'.
TRAINS trains a network with weight and bias learning rules with
sequential updates. The sequence of inputs is presented to the network
with updates occurring after each time step.
This incremental training algorithm is commonly used for adaptive
applications.
TRAINS takes these inputs:
NET - Neural network.
Pd - Delayed inputs.
Tl - Layer targets.
Ai - Initial input conditions.
Q - Batch size.
TS - Time steps.
VV - Ignored.
TV - Ignored.
and after training the network with its weight and bias
learning functions returns:
NET - Updated network.
TR - Training record.
TR.timesteps - Number of time steps.
TR.perf - performance for each time step.
Ac - Collective layer outputs.
El - Layer errors.
Training occurs according to the TRAINS' training parameter
shown here with its default value:
net.trainParam.passes 1 Number of times to present sequence
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element P{i,j,ts} is a ZijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix or [].
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Ac - Nlx(LD+TS) cell array, each element Ac{i,k} is an SixQ matrix.
El - NlxTS cell array, each element El{i,k} is an SixQ matrix or [].
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Zij = Ri * length(net.inputWeights{i,j}.delays)
TRAINS(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
You can create a standard network that uses TRAINS for adapting
by calling NEWP or NEWLIN.
To prepare a custom network to adapt with TRAINS:
1) Set NET.adaptFcn to 'trains'.
(This will set NET.adaptParam to TRAINS' default parameters.)
2) Set each NET.inputWeights{i,j}.learnFcn to a learning function.
Set each NET.layerWeights{i,j}.learnFcn to a learning function.
Set each NET.biases{i}.learnFcn to a learning function.
(Weight and bias learning parameters will automatically be
set to default values for the given learning function.)
To allow the network to adapt:
1) Set weight and bias learning parameters to desired values.
2) Call ADAPT.
See NEWP and NEWLIN for adaption examples.
Algorithm
Each weight and bias is updated according to its learning function
after each time step in the input sequence.
See also NEWP, NEWLIN, TRAIN, TRAINB, TRAINC, TRAINR.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:04
6819 bytes
TRAINSCG Scaled conjugate gradient backpropagation.
Syntax
[net,tr,Ac,El] = trainscg(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = trainscg(code)
Description
TRAINSCG is a network training function that updates weight and
bias values according to the scaled conjugate gradient method.
TRAINSCG(NET,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,
NET - Neural network.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
VV - Either empty matrix [] or structure of validation vectors.
TV - Either empty matrix [] or structure of test vectors.
and returns,
NET - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
TR.vperf - Validation performance.
TR.tperf - Test performance.
Ac - Collective layer outputs for last epoch.
El - Layer errors for last epoch.
Training occurs according to the TRAINSCG's training parameters
shown here with their default values:
net.trainParam.epochs 100 Maximum number of epochs to train
net.trainParam.show 25 Epochs between displays (NaN for no displays)
net.trainParam.goal 0 Performance goal
net.trainParam.time inf Maximum time to train in seconds
net.trainParam.min_grad 1e-6 Minimum performance gradient
net.trainParam.max_fail 5 Maximum validation failures
net.trainParam.sigma 5.0e-5 Determines change in weight for second derivative approximation.
net.trainParam.lambda 5.0e-7 Parameter for regulating the indefiniteness of the Hessian.
Dimensions for these variables are:
Pd - NoxNixTS cell array, each element P{i,j,ts} is a DijxQ matrix.
Tl - NlxTS cell array, each element P{i,ts} is a VixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
If VV is not [], it must be a structure of validation vectors,
VV.PD - Validation delayed inputs.
VV.Tl - Validation layer targets.
VV.Ai - Validation initial input conditions.
VV.Q - Validation batch size.
VV.TS - Validation time steps.
which is used to stop training early if the network performance
on the validation vectors fails to improve or remains the same
for MAX_FAIL epochs in a row.
If TV is not [], it must be a structure of validation vectors,
TV.PD - Validation delayed inputs.
TV.Tl - Validation layer targets.
TV.Ai - Validation initial input conditions.
TV.Q - Validation batch size.
TV.TS - Validation time steps.
which is used to test the generalization capability of the
trained network.
TRAINSCG(CODE) returns useful information for each CODE string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
Network Use
You can create a standard network that uses TRAINSCG with
NEWFF, NEWCF, or NEWELM.
To prepare a custom network to be trained with TRAINSCG:
1) Set NET.trainFcn to 'trainscg'.
This will set NET.trainParam to TRAINSCG's default parameters.
2) Set NET.trainParam properties to desired values.
In either case, calling TRAIN with the resulting network will
train the network with TRAINSCG.
Examples
Here is a problem consisting of inputs P and targets T that we would
like to solve with a network.
p = [0 1 2 3 4 5];
t = [0 0 0 1 1 1];
Here a two-layer feed-forward network is created. The network's
input ranges from 0 to 5. The first layer has two TANSIG
neurons, and the second layer has one LOGSIG neuron. The TRAINSCG
network training function is to be used.
% Create and Test a Network
net = newff([0 5],[2 1],{'tansig','logsig'},'trainscg');
a = sim(net,p)
% Train and Retest the Network
net.trainParam.epochs = 50;
net.trainParam.show = 10;
net.trainParam.goal = 0.1;
net = train(net,p,t);
a = sim(net,p)
See NEWFF, NEWCF, and NEWELM for other examples.
Algorithm
TRAINSCG can train any network as long as its weight, net input,
and transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance
PERF with respect to the weight and bias variables X.
The scaled conjugate gradient algorithm is based on conjugate
directions, as in TRAINCGP, TRAINCGF and TRAINCGB, but this
algorithm does not perform a line search at each iteration.
See Moller (Neural Networks, vol. 6, 1993, pp. 525-533) for a more
detailed discussion of the scaled conjugate gradient algorithm.
Training stops when any of these conditions occur:
1) The maximum number of EPOCHS (repetitions) is reached.
2) The maximum amount of TIME has been exceeded.
3) Performance has been minimized to the GOAL.
4) The performance gradient falls below MINGRAD.
5) Validation performance has increased more than MAX_FAIL times
since the last time it decreased (when using validation).
See also NEWFF, NEWCF, TRAINGDM, TRAINGDA, TRAINGDX, TRAINLM,
TRAINRP, TRAINCGF, TRAINCGB, TRAINBFG, TRAINCGP,
TRAINOSS.
References
Moller, Neural Networks, vol. 6, 1993, pp. 525-533.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:06
13408 bytes
TRIBAS Triangular basis transfer function.
Syntax
A = tribas(N,FP)
dA_dN = tribas('dn',N,A,FP)
INFO = tribas(CODE)
Description
TRIBAS is a neural transfer function. Transfer functions
calculate a layer's output from its net input.
TRIBAS(N,FP) takes N and optional function parameters,
N - SxQ matrix of net input (column) vectors.
FP - Struct of function parameters (ignored).
and returns A, an SxQ matrix of the triangular basis function
applied to each element of N.
TRIBAS('dn',N,A,FP) returns the SxQ derivative of A with respect to N.
If A or FP are not supplied or are set to [], FP reverts to
the default parameters, and A is calculated from N.
TRIBAS('name') returns the name of this function.
TRIBAS('output',FP) returns the [min max] output range.
TRIBAS('active',FP) returns the [min max] active input range.
TRIBAS('fullderiv') returns 1 or 0, whether DA_DN is SxSxQ or SxQ.
TRIBAS('fpnames') returns the names of the function parameters.
TRIBAS('fpdefaults') returns the default function parameters.
Examples
Here we create a plot of the TRIBAS transfer function.
n = -5:0.1:5;
a = tribas(n);
plot(n,a)
Here we assign this transfer function to layer i of a network.
net.layers{i}.transferFcn = 'tribas';
Algorithm
a = tribas(n) = 1 - abs(n), if -1 <= n <= 1
= 0, otherwise
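An equivalent vectorized form (a sketch, not the toolbox implementation) is:
n = -5:0.1:5;
a = max(0, 1 - abs(n));   % equals 1-abs(n) on [-1,1] and 0 elsewhere
plot(n,a)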
See also SIM, RADBAS.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:21:18
2641 bytes
UPDATENET Creates a current network object from an old network structure.
NET = UPDATE(S)
S - Structure with fields of old neural network object.
Returns
NET - New neural network
This function is called by NETWORK/LOADOBJ to update old neural
network objects when they are loaded from an M-file.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:22:38
3541 bytes
VEC2IND Transform vectors to indices.
Syntax
ind = vec2ind(vec)
Description
IND2VEC and VEC2IND allow indices to be represented
either by themselves or as vectors containing a 1 in the
row of the index they represent.
VEC2IND(VEC) takes one argument,
VEC - Matrix of vectors, each containing a single 1.
and returns the indices of the 1's.
Examples
Here four vectors (containing only one 1 each) are defined
and the indices of the 1's are found.
vec = [1 0 0 0; 0 0 1 0; 0 1 0 1]
ind = vec2ind(vec)
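For matrices like this, the same indices can also be recovered with MAX
(a sketch, not the toolbox implementation):
[dummy, ind] = max(vec)   % returns ind = [1 3 2 3] for the vec defined above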
See also IND2VEC.
ApplicationRoot\WavixIV\neural501
22-Dec-2005 12:19:24
811 bytes
ADAPT Allow a neural network to adapt.
Syntax
[net,Y,E,Pf,Af,tr] = adapt(NET,P,T,Pi,Ai)
Description
[NET,Y,E,Pf,Af,tr] = ADAPT(NET,P,T,Pi,Ai) takes,
NET - Network.
P - Network inputs.
T - Network targets, default = zeros.
Pi - Initial input delay conditions, default = zeros.
Ai - Initial layer delay conditions, default = zeros.
and returns the following after applying the adapt function
NET.adaptFcn with the adaption parameters NET.adaptParam:
NET - Updated network.
Y - Network outputs.
E - Network errors.
Pf - Final input delay conditions.
Af - Final layer delay conditions.
TR - Training record (epoch and perf).
Note that T is optional and only needs to be used for networks
that require targets. Pi and Pf are also optional and need
only to be used for networks that have input or layer delays.
ADAPT's signal arguments can have two formats: cell array or matrix.
The cell array format is easiest to describe. It is the most
convenient format for networks with multiple inputs and outputs,
and allows sequences of inputs to be presented:
P - NixTS cell array, each element P{i,ts} is an RixQ matrix.
T - NtxTS cell array, each element T{i,ts} is a VixQ matrix.
Pi - NixID cell array, each element Pi{i,k} is an RixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Y - NOxTS cell array, each element Y{i,ts} is an UixQ matrix.
E - NtxTS cell array, each element E{i,ts} is a VixQ matrix.
Pf - NixID cell array, each element Pf{i,k} is an RixQ matrix.
Af - NlxLD cell array, each element Af{i,k} is an SixQ matrix.
Where:
Ni = net.numInputs
Nl = net.numLayers
No = net.numOutputs
Nt = net.numTargets
ID = net.numInputDelays
LD = net.numLayerDelays
TS = number of time steps
Q = batch size
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Ui = net.outputs{i}.size
Vi = net.targets{i}.size
The columns of Pi, Pf, Ai, and Af are ordered from oldest delay
condition to most recent:
Pi{i,k} = input i at time ts=k-ID.
Pf{i,k} = input i at time ts=TS+k-ID.
Ai{i,k} = layer output i at time ts=k-LD.
Af{i,k} = layer output i at time ts=TS+k-LD.
The matrix format can be used if only one time step is to be
simulated (TS = 1). It is convenient for networks with
only one input and output, but can be used with networks that
have more.
Each matrix argument is found by storing the elements of
the corresponding cell array argument into a single matrix:
P - (sum of Ri)xQ matrix
T - (sum of Vi)xQ matrix
Pi - (sum of Ri)x(ID*Q) matrix.
Ai - (sum of Si)x(LD*Q) matrix.
Y - (sum of Ui)xQ matrix.
E - (sum of Vi)xQ matrix
Pf - (sum of Ri)x(ID*Q) matrix.
Af - (sum of Si)x(LD*Q) matrix.
Examples
Here two sequences of 12 steps (where T1 is known to depend
on P1) are used to define the operation of a filter.
p1 = {-1 0 1 0 1 1 -1 0 -1 1 0 1};
t1 = {-1 -1 1 1 1 2 0 -1 -1 0 1 1};
Here NEWLIN is used to create a layer with an input range
of [-1 1], one neuron, input delays of 0 and 1, and a
learning rate of 0.5. The linear layer is then simulated.
net = newlin([-1 1],1,[0 1],0.5);
Here the network adapts for one pass through the sequence.
The network's mean squared error is displayed. (Since this
is the first call of ADAPT the default Pi is used.)
[net,y,e,pf] = adapt(net,p1,t1);
mse(e)
Note the errors are quite large. Here the network adapts
to another 12 time steps (using the previous Pf as the
new initial delay conditions.)
p2 = {1 -1 -1 1 1 -1 0 0 0 1 -1 -1};
t2 = {2 0 -2 0 2 0 -1 0 0 1 0 -1};
[net,y,e,pf] = adapt(net,p2,t2,pf);
mse(e)
Here the network adapts through 100 passes through
the entire sequence.
p3 = [p1 p2];
t3 = [t1 t2];
net.adaptParam.passes = 100;
[net,y,e] = adapt(net,p3,t3);
mse(e)
The error after 100 passes through the sequence is very
small - the network has adapted to the relationship
between the input and target signals.
Algorithm
ADAPT calls the function indicated by NET.adaptFcn, using the
adaption parameter values indicated by NET.adaptParam.
Given an input sequence with TS steps the network is
updated as follows. Each step in the sequence of inputs is
presented to the network one at a time. The network's weight and
bias values are updated after each step, before the next step in
the sequence is presented. Thus the network is updated TS times.
See also INIT, REVERT, SIM, TRAIN.
ApplicationRoot\WavixIV\neural501\@network
17-Aug-2004 16:42:12
9120 bytes
DISP Display a neural network's properties.
Syntax
disp(net)
Description
DISP(NET) displays a network's properties.
Examples
Here a perceptron is created and displayed.
net = newp([-1 1; 0 2],3);
disp(net)
See also DISPLAY, SIM, INIT, TRAIN, ADAPT
ApplicationRoot\WavixIV\neural501\@network
22-Dec-2005 12:18:46
5517 bytes
DISPLAY Display the name and properties of a neural network variable.
Syntax
display(net)
Description
DISPLAY(NET) displays a network variable's name and properties.
Examples
Here a perceptron variable is defined and displayed.
net = newp([-1 1; 0 2],3);
display(net)
DISPLAY is automatically called as follows:
net
See also DISP, SIM, INIT, TRAIN, ADAPT
ApplicationRoot\WavixIV\neural501\@network
22-Dec-2005 12:18:48
700 bytes
GENSIM Generate a SIMULINK block to simulate a neural network.
Syntax
gensim(net,st)
Description
GENSIM(NET,ST) takes these inputs,
NET - Neural network.
ST - Sample time (default = 1).
and creates a SIMULINK system containing a block which
simulates neural network NET with a sampling time of ST.
If NET has no input or layer delays (NET.numInputDelays
and NET.numLayerDelays are both 0) then you can use -1 for ST to
get a continuously sampling network.
Example
net = newff([0 1],[5 1]);
gensim(net)
ApplicationRoot\WavixIV\neural501\@network
24-Mar-2004 14:42:58
20409 bytes
INIT Initialize a neural network.
Syntax
net = init(net)
Description
INIT(NET) returns neural network NET with weight and bias values
updated according to the network initialization function, indicated
by NET.initFcn, and the parameter values, indicated by NET.initParam.
Examples
Here a perceptron is created with a 2-element input (with ranges
of 0 to 1, and -2 to 2) and 1 neuron. Once it is created we can display
the neuron's weights and bias.
net = newp([0 1;-2 2],1);
net.iw{1,1}
net.b{1}
Training the perceptron alters its weight and bias values.
P = [0 1 0 1; 0 0 1 1];
T = [0 0 0 1];
net = train(net,P,T);
net.iw{1,1}
net.b{1}
INIT reinitializes those weight and bias values.
net = init(net);
net.iw{1,1}
net.b{1}
The weights and biases are zeros again, which are the initial values
used by perceptron networks (see NEWP).
Algorithm
INIT calls NET.initFcn to initialize the weight and bias values
according to the parameter values NET.initParam.
Typically, NET.initFcn is set to 'initlay' which initializes each
layer's weights and biases according to its NET.layers{i}.initFcn.
Backpropagation networks have NET.layers{i}.initFcn set to 'initnw'
which calculates the weight and bias values for layer i using the
Nguyen-Widrow initialization method.
Other networks have NET.layers{i}.initFcn set to 'initwb', which
initializes each weight and bias with its own initialization function.
The most common weight and bias initialization function is RANDS
which generates random values between -1 and 1.
See also REVERT, SIM, ADAPT, TRAIN, INITLAY, INITNW, INITWB, RANDS.
ApplicationRoot\WavixIV\neural501\@network
14-Apr-2002 16:28:54
2647 bytes
LOADOBJ Load a network object.
ApplicationRoot\WavixIV\neural501\@network
14-Apr-2002 16:29:20
231 bytes
NETWORK Create a custom neural network.
Synopsis
net = network
net = network(numInputs,numLayers,biasConnect,inputConnect,
layerConnect,outputConnect,targetConnect)
Description
NETWORK creates new custom networks. It is used to create
networks that are then customized by functions such as NEWP,
NEWLIN, NEWFF, etc.
NETWORK takes these optional arguments (shown with default values):
numInputs - Number of inputs, 0.
numLayers - Number of layers, 0.
biasConnect - numLayers-by-1 Boolean vector, zeros.
inputConnect - numLayers-by-numInputs Boolean matrix, zeros.
layerConnect - numLayers-by-numLayers Boolean matrix, zeros.
outputConnect - 1-by-numLayers Boolean vector, zeros.
targetConnect - 1-by-numLayers Boolean vector, zeros.
and returns,
NET - New network with the given property values.
Properties
Architecture properties:
net.numInputs: 0 or a positive integer.
Number of inputs.
net.numLayers: 0 or a positive integer.
Number of layers.
net.biasConnect: numLayers-by-1 Boolean vector.
If net.biasConnect(i) is 1 then the layer i has a bias and
net.biases{i} is a structure describing that bias.
net.inputConnect: numLayers-by-numInputs Boolean matrix.
If net.inputConnect(i,j) is 1 then layer i has a weight coming from
input j and net.inputWeights{i,j} is a structure describing that weight.
net.layerConnect: numLayers-by-numLayers Boolean matrix.
If net.layerConnect(i,j) is 1 then layer i has a weight coming from
layer j and net.layerWeights{i,j} is a structure describing that weight.
net.outputConnect: 1-by-numLayers Boolean vector.
If net.outputConnect(i) is 1 then the network has an output from
layer i and net.outputs{i} is a structure describing that output.
net.targetConnect: 1-by-numLayers Boolean vector.
if net.targetConnect(i) is 1 then the network has a target from
layer i and net.targets{i} is a structure describing that target.
net.numOutputs: 0 or a positive integer. Read only.
Number of network outputs according to net.outputConnect.
net.numTargets: 0 or a positive integer. Read only.
Number of targets according to net.targetConnect.
net.numInputDelays: 0 or a positive integer. Read only.
Maximum input delay according to all net.inputWeight{i,j}.delays.
net.numLayerDelays: 0 or a positive number. Read only.
Maximum layer delay according to all net.layerWeight{i,j}.delays.
Subobject structure properties:
net.inputs: numInputs-by-1 cell array.
net.inputs{i} is a structure defining input i:
net.layers: numLayers-by-1 cell array.
net.layers{i} is a structure defining layer i:
net.biases: numLayers-by-1 cell array.
if net.biasConnect(i) is 1, then net.biases{i} is a structure
defining the bias for layer i.
net.inputWeights: numLayers-by-numInputs cell array.
if net.inputConnect(i,j) is 1, then net.inputWeights{i,j} is a
structure defining the weight to layer i from input j.
net.layerWeights: numLayers-by-numLayers cell array.
if net.layerConnect(i,j) is 1, then net.layerWeights{i,j} is a
structure defining the weight to layer i from layer j.
net.outputs: 1-by-numLayers cell array.
if net.outputConnect(i) is 1, then net.outputs{i} is a structure
defining the network output from layer i.
net.targets: 1-by-numLayers cell array.
if net.targetConnect(i) is 1, then net.targets{i} is a structure
defining the network target to layer i.
Function properties:
net.adaptFcn: name of a network adaption function or ''.
net.initFcn: name of a network initialization function or ''.
net.performFcn: name of a network performance function or ''.
net.trainFcn: name of a network training function or ''.
net.gradientFcn: name of a network gradient function or ''. ODJ
Parameter properties:
net.adaptParam: network adaption parameters.
net.initParam: network initialization parameters.
net.performParam: network performance parameters.
net.trainParam: network training parameters.
net.gradientParam: network gradient parameters. ODJ
Weight and bias value properties:
net.IW: numLayers-by-numInputs cell array of input weight values.
net.LW: numLayers-by-numLayers cell array of layer weight values.
net.b: numLayers-by-1 cell array of bias values.
Other properties:
net.userdata: structure you can use to store useful values.
Examples
Here is the code to create a network without any inputs and layers,
and then set its numbers of inputs and layers to 1 and 2 respectively.
net = network
net.numInputs = 1
net.numLayers = 2
Here is the code to create the same network with one line of code.
net = network(1,2)
Here is the code to create a 1 input, 2 layer, feed-forward network.
Only the first layer will have a bias. An input weight will
connect to layer 1 from input 1. A layer weight will connect
to layer 2 from layer 1. Layer 2 will be a network output,
and have a target.
net = network(1,2,[1;0],[1; 0],[0 0; 1 0],[0 1],[0 1])
We can then see the properties of subobjects as follows:
net.inputs{1}
net.layers{1}, net.layers{2}
net.biases{1}
net.inputWeights{1,1}, net.layerWeights{2,1}
net.outputs{2}
net.targets{2}
We can get the weight matrices and bias vector as follows:
net.iw{1,1}, net.iw{2,1}, net.b{1}
We can alter the properties of any of these subobjects. Here
we change the transfer functions of both layers:
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'logsig';
Here we change the number of elements in input 1 to 2, by setting
each element's range:
net.inputs{1}.range = [0 1; -1 1];
Next we can simulate the network for a 2-element input vector:
p = [0.5; -0.1];
y = sim(net,p)
See also INIT, REVERT, SIM, ADAPT, TRAIN.
ApplicationRoot\WavixIV\neural501\@network
22-Dec-2005 12:18:48
9969 bytes
REVERT Revert network weight and bias values.
Syntax
net = revert(net)
Description
REVERT(NET) returns neural network NET with weight and bias values
restored to the values generated the last time the network was
initialized.
If the network has been altered so that it has different weight
and bias connections or different input or layer sizes, then REVERT
cannot set the weights and biases to their previous values and they
will be set to zeros instead.
Examples
Here a perceptron is created with a 2-element input (with ranges
of 0 to 1, and -2 to 2) and 1 neuron. Once it is created we can display
the neuron's weights and bias.
net = newp([0 1;-2 2],1);
The initial network has weights and biases with zero values.
net.iw{1,1}, net.b{1}
We can change these values as follows.
net.iw{1,1} = [1 2]; net.b{1} = 5;
net.iw{1,1}, net.b{1}
We can recover the network's initial values as follows.
net = revert(net);
net.iw{1,1}, net.b{1}
See also INIT, SIM, ADAPT, TRAIN.
ApplicationRoot\WavixIV\neural501\@network
14-Apr-2002 16:29:18
2523 bytes
SIM Simulate a neural network.
Syntax
[Y,Pf,Af,E,perf] = sim(net,P,Pi,Ai,T)
[Y,Pf,Af,E,perf] = sim(net,{Q TS},Pi,Ai,T)
[Y,Pf,Af,E,perf] = sim(net,Q,Pi,Ai,T)
Description
SIM simulates neural networks.
[Y,Pf,Af,E,perf] = SIM(net,P,Pi,Ai,T) takes,
NET - Network.
P - Network inputs.
Pi - Initial input delay conditions, default = zeros.
Ai - Initial layer delay conditions, default = zeros.
T - Network targets, default = zeros.
and returns:
Y - Network outputs.
Pf - Final input delay conditions.
Af - Final layer delay conditions.
E - Network errors.
perf - Network performance.
Note that arguments Pi, Ai, Pf, and Af are optional and
need only be used for networks that have input or layer delays.
SIM's signal arguments can have two formats: cell array or matrix.
The cell array format is easiest to describe. It is most
convenient for networks with multiple inputs and outputs,
and allows sequences of inputs to be presented:
P - NixTS cell array, each element P{i,ts} is an RixQ matrix.
Pi - NixID cell array, each element Pi{i,k} is an RixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
T - NtxTS cell array, each element T{i,ts} is a VixQ matrix.
Y - NOxTS cell array, each element Y{i,ts} is a UixQ matrix.
Pf - NixID cell array, each element Pf{i,k} is an RixQ matrix.
Af - NlxLD cell array, each element Af{i,k} is an SixQ matrix.
E - NtxTS cell array, each element E{i,ts} is a VixQ matrix.
Where:
Ni = net.numInputs
Nl = net.numLayers,
No = net.numOutputs
ID = net.numInputDelays
LD = net.numLayerDelays
TS = number of time steps
Q = batch size
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Ui = net.outputs{i}.size
Vi = net.targets{i}.size
The columns of Pi, Pf, Ai, and Af are ordered from oldest delay
condition to most recent:
Pi{i,k} = input i at time ts=k-ID.
Pf{i,k} = input i at time ts=TS+k-ID.
Ai{i,k} = layer output i at time ts=k-LD.
Af{i,k} = layer output i at time ts=TS+k-LD.
The matrix format can be used if only one time step is to be
simulated (TS = 1). It is convenient for networks with only
one input and output, but can also be used with networks that
have more.
Each matrix argument is found by storing the elements of
the corresponding cell array argument into a single matrix:
P - (sum of Ri)xQ matrix
Pi - (sum of Ri)x(ID*Q) matrix.
Ai - (sum of Si)x(LD*Q) matrix.
T - (sum of Vi)xQ matrix
Y - (sum of Ui)xQ matrix.
Pf - (sum of Ri)x(ID*Q) matrix.
Af - (sum of Si)x(LD*Q) matrix.
E - (sum of Vi)xQ matrix
[Y,Pf,Af] = SIM(net,{Q TS},Pi,Ai) is used for networks
which do not have an input, such as Hopfield networks
when cell array notation is used.
Examples
Here NEWP is used to create a perceptron layer with a
2-element input (with ranges of [0 1]), and a single neuron.
net = newp([0 1;0 1],1);
Here the perceptron is simulated for an individual vector,
a batch of 3 vectors, and a sequence of 3 vectors.
p1 = [.2; .9]; a1 = sim(net,p1)
p2 = [.2 .5 .1; .9 .3 .7]; a2 = sim(net,p2)
p3 = {[.2; .9] [.5; .3] [.1; .7]}; a3 = sim(net,p3)
Here NEWLIND is used to create a linear layer with a 3-element
input, 2 neurons.
net = newlin([0 2;0 2;0 2],2,[0 1]);
Here the linear layer is simulated with a sequence of 2 input
vectors using the default initial input delay conditions (all zeros).
p1 = {[2; 0.5; 1] [1; 1.2; 0.1]};
[y1,pf] = sim(net,p1)
Here the layer is simulated for 3 more vectors using the previous
final input delay conditions as the new initial delay conditions.
p2 = {[0.5; 0.6; 1.8] [1.3; 1.6; 1.1] [0.2; 0.1; 0]};
[y2,pf] = sim(net,p2,pf)
Here NEWELM is used to create an Elman network with a 1-element
input, and a layer 1 with 3 TANSIG neurons followed by a layer 2
with 2 PURELIN neurons. Because it is an Elman network it has a
tap delay line with a delay of 1 going from layer 1 to layer 1.
net = newelm([0 1],[3 2],{'tansig','purelin'});
Here the Elman network is simulated for a sequence of 3 values
using default initial delay conditions.
p1 = {0.2 0.7 0.1};
[y1,pf,af] = sim(net,p1)
Here the network is simulated for 4 more values, using the previous
final delay conditions as the new initial delay conditions.
p2 = {0.1 0.9 0.8 0.4};
[y2,pf,af] = sim(net,p2,pf,af)
Algorithm
SIM uses these properties to simulate a network NET.
NET.numInputs, NET.numLayers
NET.outputConnect, NET.biasConnect
NET.inputConnect, NET.layerConnect
These properties determine the network's weight and bias values,
and the number of delays associated with each weight:
NET.inputWeights{i,j}.value
NET.layerWeights{i,j}.value
NET.biases{i}.value
NET.inputWeights{i,j}.delays
NET.layerWeights{i,j}.delays
These function properties indicate how SIM applies weight and
bias values to inputs to get each layer's output:
NET.inputWeights{i,j}.weightFcn
NET.layerWeights{i,j}.weightFcn
NET.layers{i}.netInputFcn
NET.layers{i}.transferFcn
See Chapter 2 for more information on network simulation.
See also INIT, REVERT, ADAPT, TRAIN
ApplicationRoot\WavixIV\neural501\@network
22-Dec-2005 12:18:50
10133 bytes
SUBSASGN Assign fields of a neural network.
ApplicationRoot\WavixIV\neural501\@network
03-Oct-2006 15:51:30
71061 bytes
SUBSASGN Assign fields of a neural network.
ApplicationRoot\WavixIV\neural501\@network
03-Oct-2006 15:49:08
71472 bytes
SUBSREF Reference fields of a neural network.
ApplicationRoot\WavixIV\neural501\@network
14-Apr-2002 16:29:12
1078 bytes
TRAIN Train a neural network.
Syntax
[net,tr,Y,E,Pf,Af] = train(NET,P,T,Pi,Ai,VV,TV)
Description
TRAIN trains a network NET according to NET.trainFcn and
NET.trainParam.
TRAIN(NET,P,T,Pi,Ai) takes,
NET - Network.
P - Network inputs.
T - Network targets, default = zeros.
Pi - Initial input delay conditions, default = zeros.
Ai - Initial layer delay conditions, default = zeros.
VV - Structure of validation vectors, default = [].
TV - Structure of test vectors, default = [].
and returns,
NET - New network.
TR - Training record (epoch and perf).
Y - Network outputs.
E - Network errors.
Pf - Final input delay conditions.
Af - Final layer delay conditions.
Note that T is optional and need only be used for networks
that require targets. Pi and Pf are also optional and need
only be used for networks that have input or layer delays.
Optional arguments VV and TV are described below.
TRAIN's signal arguments can have two formats: cell array or matrix.
The cell array format is easiest to describe. It is most
convenient for networks with multiple inputs and outputs,
and allows sequences of inputs to be presented:
P - NixTS cell array, each element P{i,ts} is an RixQ matrix.
T - NtxTS cell array, each element T{i,ts} is a VixQ matrix.
Pi - NixID cell array, each element Pi{i,k} is an RixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Y - NOxTS cell array, each element Y{i,ts} is a UixQ matrix.
E - NtxTS cell array, each element E{i,ts} is a VixQ matrix.
Pf - NixID cell array, each element Pf{i,k} is an RixQ matrix.
Af - NlxLD cell array, each element Af{i,k} is an SixQ matrix.
Where:
Ni = net.numInputs
Nl = net.numLayers
Nt = net.numTargets
ID = net.numInputDelays
LD = net.numLayerDelays
TS = number of time steps
Q = batch size
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
The columns of Pi, Pf, Ai, and Af are ordered from the oldest delay
condition to most recent:
Pi{i,k} = input i at time ts=k-ID.
Pf{i,k} = input i at time ts=TS+k-ID.
Ai{i,k} = layer output i at time ts=k-LD.
Af{i,k} = layer output i at time ts=TS+k-LD.
The matrix format can be used if only one time step is to be
simulated (TS = 1). It is convenient for networks with
only one input and output, but can be used with networks that
have more.
Each matrix argument is found by storing the elements of
the corresponding cell array argument into a single matrix:
P - (sum of Ri)xQ matrix
T - (sum of Vi)xQ matrix
Pi - (sum of Ri)x(ID*Q) matrix.
Ai - (sum of Si)x(LD*Q) matrix.
Y - (sum of Ui)xQ matrix.
E - (sum of Vi)xQ matrix
Pf - (sum of Ri)x(ID*Q) matrix.
Af - (sum of Si)x(LD*Q) matrix.
If VV and TV are supplied they should be an empty matrix [] or
a structure with the following fields:
VV.P, TV.P - Validation/test inputs.
VV.T, TV.T - Validation/test targets, default = zeros.
VV.Pi, TV.Pi - Validation/test initial input delay conditions, default = zeros.
VV.Ai, TV.Ai - Validation/test layer delay conditions, default = zeros.
The validation vectors are used to stop training early if further
training on the primary vectors will hurt generalization to the
validation vectors. Test vector performance can be used to measure
how well the network generalizes beyond primary and validation vectors.
If VV.T, VV.Pi, or VV.Ai are set to an empty matrix or cell array,
default values will be used. The same is true for TV.T, TV.Pi, TV.Ai.
Not all training functions support validation and test vectors.
Those that do not ignore the VV and TV arguments.
Examples
Here input P and targets T define a simple function which
we can plot:
p = [0 1 2 3 4 5 6 7 8];
t = [0 0.84 0.91 0.14 -0.77 -0.96 -0.28 0.66 0.99];
plot(p,t,'o')
Here NEWFF is used to create a two layer feed forward network.
The network will have an input (ranging from 0 to 8), followed
by a layer of 10 TANSIG neurons, followed by a layer with 1
PURELIN neuron. TRAINLM backpropagation is used. The network
is also simulated.
net = newff([0 8],[10 1],{'tansig' 'purelin'},'trainlm');
y1 = sim(net,p)
plot(p,t,'o',p,y1,'x')
Here the network is trained for up to 50 epochs to an error goal of
0.01, and then resimulated.
net.trainParam.epochs = 50;
net.trainParam.goal = 0.01;
net = train(net,p,t);
y2 = sim(net,p)
plot(p,t,'o',p,y1,'x',p,y2,'*')
Algorithm
TRAIN calls the function indicated by NET.trainFcn, using the
training parameter values indicated by NET.trainParam.
Typically one epoch of training is defined as a single presentation
of all input vectors to the network. The network is then updated
according to the results of all those presentations.
Training occurs until the maximum number of epochs is reached, the
performance goal is met, or any other stopping condition of the
function NET.trainFcn occurs.
Some training functions depart from this norm by presenting only
one input vector (or sequence) each epoch. An input vector (or sequence)
is chosen randomly each epoch from concurrent input vectors (or sequences).
NEWC and NEWSOM return networks that use TRAINR, a training function
that does this.
See also INIT, REVERT, SIM, ADAPT
ApplicationRoot\WavixIV\neural501\@network
17-Aug-2004 16:42:14
12475 bytes
ACTIVE Returns number of structures in cell array.
ApplicationRoot\WavixIV\neural501\@network\private
14-Apr-2002 16:30:24
278 bytes
CHECKAI Check Ai dimensions.
Synopsis
[err,Ai] = checkai(net,Ai,Q)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code dependent on this function.
ApplicationRoot\WavixIV\neural501\@network\private
14-Apr-2002 16:30:12
1480 bytes
CHECKP Check P dimensions.
Synopsis
[err] = checkp(net,P,Q,TS)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code dependent on this function.
ApplicationRoot\WavixIV\neural501\@network\private
14-Apr-2002 16:30:14
1155 bytes
CHECKPI Check Pi dimensions.
Synopsis
[err,pi] = checkpi(net,Pi,Q)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code dependent on this function.
ApplicationRoot\WavixIV\neural501\@network\private
14-Apr-2002 16:30:18
1480 bytes
CHECKT Check T dimensions.
Synopsis
[err,T] = checkt(net,T,Q,TS)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code dependent on this function.
ApplicationRoot\WavixIV\neural501\@network\private
14-Apr-2002 16:30:06
1383 bytes
FORMATAI Format matrix Ai.
Synopsis
[err,Ai] = formatai(net,Ai,Q)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code dependent on this function.
ApplicationRoot\WavixIV\neural501\@network\private
14-Apr-2002 16:30:20
1124 bytes
FORMATP Format matrix P.
Synopsis
[err,P] = formatp(net,P,Q)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code dependent on this function.
ApplicationRoot\WavixIV\neural501\@network\private
14-Apr-2002 16:29:24
796 bytes
FORMATPI Format matrix Pi.
Synopsis
[err,Pi] = formatpi(net,Pi,Q)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code dependent on this function.
ApplicationRoot\WavixIV\neural501\@network\private
14-Apr-2002 16:29:56
1122 bytes
FORMATT Format matrix T.
Synopsis
[err,T] = formatt(net,T,Q)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code dependent on this function.
ApplicationRoot\WavixIV\neural501\@network\private
14-Apr-2002 16:29:26
1009 bytes
HASFIELD Does structure have a field.
Syntax
hasfield(S,N)
Warning!!
This function may be altered or removed in future
releases of the Neural Network Toolbox. We recommend
you do not write code which calls this function.
ApplicationRoot\WavixIV\neural501\@network\private
14-Apr-2002 16:29:30
500 bytes
ISBOOL True for properly sized boolean matrices.
ApplicationRoot\WavixIV\neural501\@network\private
14-Apr-2002 16:29:36
370 bytes
ISPOSINT True for positive integer values.
ApplicationRoot\WavixIV\neural501\@network\private
14-Apr-2002 16:29:54
272 bytes
ANY2WGS - convert coordinates to WGS84 coordinates, truncate if needed
CALL:
WGS = ANY2WGS(crd_in, format)
INPUT:
crd_in: <array of float> coordinates to convert to WGS84
format: <string> (optional) possible values:
(default) 'RD' - RD coordinates
'E50' -
OUTPUT:
WGS: WGS84 coordinates
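EXAMPLE (illustrative sketch; the RD coordinate pair is an assumed value):
%convert an RD coordinate pair to WGS84
WGS = any2wgs([155000 463000], 'RD');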
ModelitUtilRoot
16-Aug-2008 11:08:19
1839 bytes
ComposeDirList -
CALL:
Contents = ComposeDirList(dirlist,fields,dateformat)
INPUT:
dirlist:
fields:
dateformat:
OUTPUT:
Contents: <struct> with fields:
header - <cellstring> with column names
data - <cell array> with data
on the basis of which a table can be filled
See also: jacontrol
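EXAMPLE (illustrative sketch; the field names and date format are assumptions, since they are not documented above):
dirlist  = dir('*.m');   %standard dir() struct array
Contents = ComposeDirList(dirlist, {'name','date','bytes'}, 'dd-mm-yyyy');
%Contents.header (column names) and Contents.data can then be used to fill a table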
ModelitUtilRoot
22-Feb-2008 20:04:54
2577 bytes
aggBins - divide the vector X with values Y into bins (intervals with
identical values)
CALL:
[X, Y, widths] = aggBins(X, Y)
INPUT:
X: <double> with class centres (length is length(Y))
or bin edges (length is length(Y) + 1)
Y: <double> with the values belonging to X
OUTPUT:
X: <double> bin edges (length is length(Y) + 1); adjacent
intervals with identical values have been
aggregated into a single bin
Y: <double> values belonging to X
widths: <handle> widths of all bins
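EXAMPLE (illustrative sketch, assuming class centres are passed for X):
X = 1:6;                        %class centres
Y = [2 2 2 5 5 3];              %values per class
[X, Y, widths] = aggBins(X, Y); %adjacent intervals with equal values are merged into one bin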
ModelitUtilRoot
14-Oct-2007 08:54:12
1305 bytes
asciiedit - open file in ascii editor
CALL
asciiedit(fname)
INPUT
fname: file to open
OUTPUT
none (a file is opened in an ascii editor)
APPROACH
the path to the editor is read from the script notepad.bat
when this script is not present, notepad.exe (without path)
is the default editor
The script notepad.bat is created with the command
which notepad.exe > notepad.bat
ModelitUtilRoot
15-Aug-2008 18:35:14
1569 bytes
assertm - check condition; if false, call error(msg)
CALL
assertm(condition)
assertm(condition,msg)
INPUT
condition:
boolean
msg:
error message that will be displayed if condition==false
OUTPUT
This function returns no output arguments
EXAMPLE
assertm(exist(fname,'file'),'input file does not exist')
NOTE
assertm.m replaces assert.m because 2008a contains a duplicate
function assert
ModelitUtilRoot
06-May-2009 13:49:55
704 bytes
autolegend - (re)install legend or execute legend callback
CALL:
[RESIZEDELAYED, ACTIVE] = autolegend(VISIBLE, opt)
INPUT:
opt:
opt.CLIPFRAME : do not show legend items outside this area
opt.LEGFRAME : handle of frame on which to plot legend. to be resized after
updating legend.
IF NOT EMPTY: set pixelsize for this frame
and indicate whether a call to mbdresize is needed
opt.PLOTAXES : handle(s) of axes to plot legend from
opt.patchprops : if specified a patch will be plotted just
inside the clipframe. This patch obscures
elements from the graph and thus prevents
mingling of the legend with the rest of
the graph.
example: struct('facec',AXESCOLOR)
opt.headerprops: if specified a header will be plotted with
the properties specified in this
structure. Note that the header will be
plotted using "text" (not
"uicontrol('style','text')")
example: struct('str',Legend,'fontw','bold')
opt.maxpixelh : limit to the pixelheight of the frame
opt.unique : if 1, show only first occurrence of label
(default 0)
opt.legendbd: buttondown function
opt.NORESIZE: if true, do not modify application data "pixelsize" of
LEGFRAME depending on legend (non default). In
some instances this behavior is not wanted
(for example if legends are required to
be aligned).
opt.LINEW
opt.LMARGE
opt.MIDMARGE
opt.RMARGE
opt.TMARGE
opt.VMARGE
opt.BMARGE
opt.font: <struct> with fields
INDIRECT INPUT
application data "label": this functions searches for line or patch
objects for which the application data label has been set.
Toggle sequence for these items: NORMAL-EMPHASIS-NORMAL
application data "legtext": this functions searches for line or patch
objects for which tjhe application data label has been set
Toggle sequence for these items: NORMAL-EMPHASIS-OFF-NORMAL
OUTPUT
RESIZEDELAYED: if 1: frame size has changed, mbdresize should be called to paint frames
ACTIVE: true if legend contains at least 1 element
legItems: line o
+----label
+----handles
+----hidable
+----leghandle: handle of line or patch object in legend
INDIRECT OUTPUT
This function deletes and plots legend objects
This function sets the pixelsize width of FRAMEAXES
If patchprops is specified this function initiates a global invisible
axes (LAYER 3), or makes an earlier global axes the current one.
CODE EXAMPLE
% STEP 1: Install frame for legend (include this code when
% installing the GUI)
% create outer frame:
h_frame = mbdcreateframe(h_parent,...
'title','Legend',...
'tag','fixedlegendframe',...
'pixelsize',[NaN 0],... %width depends on subframe
'normsize',[0 1],... %height depends on parent frame
'lineprops',mbdlineprops,...%do not use border==1, because then lines will not be visible
'minmarges',[2 2 2 2]);
% create slider and link this to outer frame ==> changing the
% sliders value will shift the contents of the outer frame
hslid=uicontrol('style','slider');
mbdlinkobj(hslid,h_frame,...
'normpos',[1 0 0 1],...
'pixelpos',[-12 0 12 0]);
mbdlinkslider2frame(hslid,h_frame);
%note: slider claims part of the width of the outer frame.
%autolegend takes this into account by claiming extra room for the
%inner frame
%specify the inner frame. This frame may move up and down in the
%outer frame, depending on the slider position
mbdcreateframe(h_frame,...
'tag','innerlegendframe',...
'pixelsize',[0 0],...
'normsize',[0 1],...
'border',0,...
'splithor',0,...
'minmarges',[0 0 0 0]);
%All required frames are now installed
<other code>
%--------------------------
<other code>
% STEP 2: Install axes, plot figure and set label properties
axes('tag','MyAxes')
h=plot(1:10)
setappdata(h,'label','My Line'); %setting the label property
%tells autolegend to include the label
h=line(1:10,2:11)
setappdata(h,'legtext','My Line2'); %setting the legtext property
%tells autolegend to include the label
<other code>
%--------------------------
<other code>
% STEP 3: Update the legend
legopt=struct('LEGFRAME',gch('innerlegendframe',HWIN),...
'CLIPFRAME',gch('fixedlegendframe',HWIN),...
'PLOTAXES',[gch('MyAxes',HWIN);gch('OtherAxes',HWIN)]);
if autolegend(1,legopt)
%autolegend may change the size of innerlegendframe, depending on the displayed label sizes.
%If this is the case, mbdresize must be called to repaint all frames
mbdresize;
end
ModelitUtilRoot
13-Oct-2009 19:34:00
24304 bytes
script that closes all figures and clears all variables not used as a function in any application
ModelitUtilRoot
15-Aug-2008 12:38:38
164 bytes
cell2hashtable - convert cell array to a java hashtable
CALL:
ht = cell2hashtable(c)
INPUT:
c: cell array with two columns: column 1: hashtable keys
column 2: hashtable values
OUTPUT:
ht: java.util.Hashtable
See also: hashtable2cell
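EXAMPLE (illustrative sketch):
c  = {'alpha', 'first'; 'beta', 'second'};  %column 1: keys, column 2: values
ht = cell2hashtable(c);                     %java.util.Hashtable
c2 = hashtable2cell(ht);                    %convert back to a cell array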
ModelitUtilRoot
29-Jan-2008 20:05:48
542 bytes
centralpos - position a window more or less in the centre of the
screen
CALL:
pos = centralpos(windowSize)
INPUT:
windowSize: window size in pixels
OUTPUT
pos: new centralized position for figure
EXAMPLE:
%centralize current window
centralpos(mbdpixelsize(hframe));
see also: mbdresize, mbdpixelsize, movegui(HWIN,'center');
ModelitUtilRoot
16-Aug-2008 12:03:48
599 bytes
chararray2char - convert char array to string
CALL:
str = chararray2char(str)
INPUT:
str: char array
linebreak: (optional) string with linebreak character
default: char(10)
OUTPUT:
str: string
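EXAMPLE (illustrative sketch, assuming the default linebreak char(10)):
ca  = ['abc'; 'def'];      %2x3 char array
str = chararray2char(ca);  %rows joined into one string, separated by char(10)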
ModelitUtilRoot
09-Oct-2009 11:56:28
451 bytes
copystructure - copy the contents of one structure into another,
but preserve the original field order
New fields are added if needed
CALL:
copyto = copystructure(copyfrom,copyto)
INPUT:
copyfrom: <struct> structure with overrides
NOTE: "copyfrom" should support methods "fieldnames" and
"subsasgn". Therefore undoredo objects are allowed here.
copyto: <struct> structure with overridable data
OUTPUT:
copyto: <struct> adapted structure
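EXAMPLE (illustrative sketch):
copyto   = struct('a', 1, 'b', 2);
copyfrom = struct('b', 20, 'c', 30);
copyto   = copystructure(copyfrom, copyto); %b is overridden, c is added, field order of copyto is preserved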
ModelitUtilRoot
17-Apr-2010 13:51:56
6273 bytes
date_ax - supply a number of axes with date ticks
CALL:
date_ax(xa,ya)
INPUT:
xa: x axes handles
ya: y axes handles
OUTPUT:
none
APPROACH:
It is assumed that data are specified in datenum format
See also: zoomtool
ModelitUtilRoot
27-Nov-2006 13:54:34
870 bytes
datenum2java - convert Matlab datenum to Java date
CALL
jdate = datenum2java(dn)
INPUT
dn:
matlab datenumber
OUTPUT
jdate:
Equivalent Java date object
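EXAMPLE (illustrative sketch):
jdate = datenum2java(now); %current Matlab datenum as a Java date object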
ModelitUtilRoot
15-Aug-2008 18:50:04
398 bytes
Display dateticks in EU style
MODIFIED 18 Dec 2000 by Nanne van der Zijpp, for application in Matlab V6
Suppress warnings
WYZ May 2004: use newer datetick as a base file
WYZ May 2006: use newer datetick as a base file (Matlab R2006a)
Look for PATCHMODELIT to find applied changes
ModelitUtilRoot
16-Aug-2008 11:35:49
18184 bytes
ModelitUtilRoot
22-Mar-2009 14:29:44
624 bytes
decomment_line - remove comment from separate line
CALL:
str = decomment_line(str)
INPUT:
str: <string> to decomment
OUTPUT:
str: <string> without comments
See also: strtok, deblank, readComments
ModelitUtilRoot
18-Sep-2010 18:52:46
1237 bytes
defaultpath - store or retrieve default path
NOTE: this module will become obsolete. It has been replaced by
defaultpathNew.
CALL
[NewPath,Pathlist]=defaultpath(NewPath,tag)
NOTE
"defaultpath" will become obsolete. Use defaultpathNew instead. Note
that defaultpathNew requires tag as first argument.
ModelitUtilRoot
21-Feb-2010 15:04:29
1043 bytes
defaultpathNew - store or retrieve default path
NOTE: this module replaces defaultpath
CALL
Retrieve path or history:
[NewPath]=defaultpathNew(tag)
[NewPath,Pathlist]=defaultpathNew(tag)
Set path and history:
defaultpathNew(tag,NewPath)
[NewPath]=defaultpathNew(tag,NewPath)
[NewPath,Pathlist]=defaultpathNew(tag,NewPath)
INPUT
tag: integer or char string identifier (defaults to 1).
NewPath: path history
if no input, path will be retrieved from preference settings
if setting does not exist, default path = pwd/data
if directory pwd/data does not exist default path =defaultPath
defaultPath: (optional) default: pwd
OUTPUT
NewPath: preferred path (existence has been checked)
Pathlist:
history of last 25 selected paths (existence has been checked)
NOTE
The path returned by defaultpath includes the filesep sign!!
See also:
mbdparse
www.modelit.nl/modelit/matlabnotes/mbdparse-dropdown.pdf
ModelitUtilRoot
02-Jun-2010 07:24:42
6076 bytes
dprintf - shortcut for disp(sprintf(formatstr,arg1,arg2,..arg14))
CALL
dprintf(formatstr,arg1,arg2,..arg14)
INPUT
formatstr : format string (char array)
arg1,arg2,..arg14:
OUTPUT
a string is displayed in the command window
See also: SPRINTF, DISP, EPRINTF, DDPRINTF, dprintfb
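EXAMPLE (illustrative sketch):
dprintf('processed %d of %d files', 3, 10); %equivalent to disp(sprintf(...))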
ModelitUtilRoot
16-Aug-2008 14:24:51
535 bytes
eprintf - shortcut for error(sprintf(formatstr,arg1,arg2,..arg14))
CALL:
eprintf(formatstr,arg1,arg2,..arg14)
INPUT:
formatstr : format string (char array)
arg1,arg2,..arg14:
OUTPUT:
an error is raised with the formatted message
See also: sprintf, disp, dprintf
ModelitUtilRoot
26-Feb-2008 23:11:32
403 bytes
evalCallback - execute uicontrol callback from command line or function
INPUT
CallBack: one of the following:
- string to evaluate (obsolete)
- function pointer
- cell array, first element is function pointer
hObject: handle to pass on
event: appears to be unused
varargin: arguments to pass on to function
See also: evalany
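EXAMPLE (illustrative sketch; myCallback is a hypothetical callback function):
cb = {@myCallback, 'extra argument'};  %cell array: function pointer plus extra arguments
evalCallback(cb, gcf, []);             %invoke with hObject = gcf and an empty event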
ModelitUtilRoot
06-Apr-2009 17:28:53
2053 bytes
exetimestamp_create - create the file exetimestamp.m
CALL
exetimestamp_create(applicname,vrs): create/update exetimestamp m-file
exetimestamp_create: create overview of exetimestamp m-files
INPUT
applicname: Name of application (optional)
NOTE: any '_' symbol will be replaced with blanks in the
screen message
vrs : M-file bundle version number (optional)
vrs is usually specified as a string to control the number of
digits, or to add a letter, for example 1.10a.
If specified numerically, it will be converted to a
string.
EXAMPLE
build script:
exetimestamp_create('MYPROG','1.00')
m-file:
exetimestamp_create_MYPROG;
ModelitUtilRoot
10-Mar-2010 10:11:20
6095 bytes
exist_cmp - check if file or directory exists
CALL
rc=exist_cmp(str,mode)
INPUT
str: string to look for
mode: {'file'} or 'dir'
SEE ALSO
isdirectory
EXAMPLES
exist_cmp('utils','dir')
exist_cmp('autoexec.bat','file')
NOTE:
-1-
this version behaves like 'exist' but can be compiled
-2-
This function is now obsolete because Matlab provides a version of
exist that compiles without problems
ModelitUtilRoot
17-Aug-2008 18:04:34
1192 bytes
extensie - verify extension, append if needed
CALL
fname=extensie(fname,ext)
INPUT
fname:
candidate filename
ext :
required file extension
OUTPUT
fname:
filename including extension
See also: fileparts, putfile, getfile
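EXAMPLE (illustrative sketch, assuming the extension is passed without a leading dot):
fname = extensie('results', 'txt');     %returns 'results.txt'
fname = extensie('results.txt', 'txt'); %extension already present, returned unchanged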
ModelitUtilRoot
15-Aug-2008 12:52:35
611 bytes
findstructure - find matching elements of structure in structure array
CALL
Indx=findstructure(PatternStruct,StructArray)
INPUT
PatternStruct: structure to look for
this must be a non-empty structure
StructArray: structure array to look in
this must be a structure array that has at least the
fields of PatternStruct
flds: fields to compare (optional)
default value: intersection of fields in PatternStruct and StructArray
EXACT: if false also look for partial matches: match 'aaa' with 'aaabb'
OUTPUT
Indx: StructArray(Indx) corresponds to PatternStruct
SEE ALSO
is_in_struct
is_in
row_is_in
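EXAMPLE (illustrative sketch):
StructArray   = struct('name', {'a', 'b', 'c'}, 'id', {1, 2, 3});
PatternStruct = struct('name', 'b');
Indx = findstructure(PatternStruct, StructArray); %StructArray(Indx) matches PatternStruct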
ModelitUtilRoot
15-Aug-2008 21:40:07
2547 bytes
gch - find uicontrol handles with specified tags
CALL:
h = gch(tag, hwin, h)
INPUT:
tag: string or cellstring with tags
hwin: (optional) handle of window to search in
default value: gcf
h: (optional) the default value
OUTPUT:
h: array with handles of uicontrol object with the specified tag
EXAMPLE:
h=0; %0 means uninitialized
HWIN = figure;
if expression
%this line might or might not be reached
h=gch('mytag',HWIN);
end
h=gch('mytag',HWIN,h); %retrieve h if uninitialized
NOTE:
[] is NOT a correct way to denote an uninitialized handle
See also: gchbuf, gcjh
ModelitUtilRoot
20-Apr-2009 11:34:43
1481 bytes
gcjh - find jacontrol object handles with specified tags
CALL:
h = gcjh(tag, hwin, h)
INPUT:
tag: string or cellstring with tags
hwin: (optional) handle of window to search in
default value: gcf
h: (optional) the default value
OUTPUT:
h: array with handles of jacontrol object with the specified tag
See also: gch, findjac
ModelitUtilRoot
29-Apr-2008 14:13:08
1068 bytes
getFigureClientBase - get FigureClientBase object for specified figure
CALL:
FigureClientBase = getFigureClientBase(HWIN)
INPUT:
HWIN: <handle> of figure
OUTPUT:
FigureClientBase:
<java object>
com.mathworks.hg.peer.FigureClientProxy$FigureDTClientBase
ModelitUtilRoot
17-Apr-2009 10:29:38
916 bytes
getMatlabVersion - retrieve Matlab version as numeric constant
CALL
v=getMatlabVersion
INPUT
No input arguments required
OUTPUT
v: Matlabversion.
Examples of output:
6.5018
7.0124
7.0436
7.1024
7.9052 - R2009b
ModelitUtilRoot
12-Aug-2010 12:37:55
938 bytes
getRemoteFile - get file from ftp server
CALL
getRemoteFile(obj, event, C, fname)
getRemoteFile(obj, event, C, fname, path)
getRemoteFile(obj, event, C, fname, path, SEARCHLOCAL)
getRemoteFile(obj, event, C, fname, path, SEARCHLOCAL, postProcess)
getRemoteFile(obj, event, C, fname, path, SEARCHLOCAL, postProcess,MAXVERSION)
INPUT
obj,event: not used
C:
structure with constants
+----FRAMECOLOR : background color for frame
+----TEXTPROPS : font properties for text object
+----PUSHPROPS : properties for button object
fname:
file name without path
path:
{url,username,password, path1{:}}
SEARCHLOCAL:
look for local file before downloading
postProcess: <function pointer>
postprocess function. After a successful download the argument
"fname" including local path will be passed on to this function:
postProcess(fname)
MAXVERSION: <logical>
(if true) look for all versions and download the highest version
See also: helpmenu
EXAMPLE
%create button HELP in toolbar
Htool = uitoolbar(HWIN);
uipushtool(Htool,'cdata',getcdata('help'),...
'separator','on',...
'tooltip','Open help file (download when needed)',...
'clicked',{@getRemoteFile,C,'jaarcontroleHelp.pdf'});
ModelitUtilRoot
20-Nov-2008 00:00:10
8638 bytes
getRoot - get root of current directory
CALL:
INPUT:
no input required
OUTPUT:
root: string
ModelitUtilRoot
02-Jun-2010 15:30:32
209 bytes
getRootPane - get RootPane for specified figure
CALL:
RootPane = getRootPane(HWIN)
INPUT:
HWIN: <handle> of figure
OUTPUT:
RootPane: <java object> com.mathworks.mwswing.desk.DTRootPane
MATLAB COMPATIBILITY:
TEST SCRIPT: c;rp=getRootPane
TESTED WITH MATLAB VERSIONS
6.5: : NO
7.0.1.24704 (R14) Service Pack 1: YES
7.0.4.365 (R14) Service Pack 2 : YES
7.1.0.246 (R14) Service Pack 3 : YES
7.2.0.232 (R2006a) : YES
7.3.0.267 (R2006b) : YES
ModelitUtilRoot
15-Aug-2008 21:31:12
2285 bytes
get_c_default - define default colors for colors and uicontrols
CALL
C=get_c_default
INPUT
This function requires no input arguments
OUTPUT
C
+----FRAMECOLOR (double array)
+----WINCOLOR (double array)
+----DEFAULTDIR
| +----WORKSPACE (double)
| +----ASCII (double)
| +----BINFILES (double)
| +----ADYFILES (double)
| +----ADY2BINFILES (double)
| +----BPSKEYFILE (double)
| +----BPSMATCHFILE (double)
| +----TSWDIR (double)
| +----BN2FILE (double)
| +----TRAJECT (double)
| +----MATFILE (double)
| +----AGGREGDAYFILES (double)
| +----NOLFILE (double)
| +----FIGFILES (double)
+----BSIZE (double)
+----FILLSIZE (double)
+----TOOLBHEIGHT (double)
+----TOOLBFRAMEHEIGHT (double)
+----LMARGE (double)
+----RMARGE (double)
+----LRMARGE (double)
+----TMARGE (double)
+----BMARGE (double)
+----VMARGE (double)
+----SMALLMARGE (double)
+----MINMARGES (double array)
+----LISTHEADER
| +----fonts (double)
| +----style (char array)
| +----fontn (char array)
| +----horiz (char array)
| +----backg (double array)
+----TEXTHEADER
| +----fonts (double)
| +----fontn (char array)
| +----horiz (char array)
| +----VerticalAlignment (char array)
| +----margin (double)
| +----units (char array)
+----EDITPROPS
| +----FontName (char array)
| +----FontSize (double)
| +----FontWeight (char array)
| +----FontUnits (char array)
| +----style (char array)
| +----backg (double array)
+----PUSHPROPS
| +----FontName (char array)
| +----FontSize (double)
| +----FontWeight (char array)
| +----FontUnits (char array)
| +----style (char array)
+----TEXTPROPS
| +----FontName (char array)
| +----FontSize (double)
| +----FontWeight (char array)
| +----FontUnits (char array)
| +----style (char array)
| +----backg (double array)
| +----horizon (char array)
+----TEXTMSGPROPS
| +----FontName (char array)
| +----FontSize (double)
| +----FontWeight (char array)
| +----FontUnits (char array)
| +----style (char array)
| +----backg (double array)
| +----horizon (char array)
+----CHECKPROPS
| +----FontName (char array)
| +----FontSize (double)
| +----FontWeight (char array)
| +----FontUnits (char array)
| +----backg (double array)
| +----style (char array)
+----POPUPPROPS
| +----FontName (char array)
| +----FontSize (double)
| +----FontWeight (char array)
| +----FontUnits (char array)
| +----style (char array)
| +----backg (char)
| +----horiz (char array)
+----LISTPROPS
| +----FontName (char array)
| +----FontSize (double)
| +----FontWeight (char array)
| +----FontUnits (char array)
| +----style (char array)
| +----fontn (char array)
| +----horiz (char array)
| +----backg (char)
+----LISTHDR
+----FontName (char array)
+----FontSize (double)
+----FontWeight (char array)
+----FontUnits (char array)
+----style (char array)
+----backg (double array)
+----horizon (char array)
+----fontn (char array)
ModelitUtilRoot
31-May-2010 14:02:20
9799 bytes
get_constants - get user configurable options and save them to file
CALL
C=get_constants(MODE,STTFILE,LANGUAGE)
INPUT
MODE: 1==> retrieve options
2==>start gui and retrieve/save options
STTFILE: name of settingsfile
LANGUAGE: dutch==> use dutch labels
uk ==> use uk english labels
OUTPUT
C: constant structure, with
GLOBALFONT: structure with fields
FontName
FontSize
FontWeight
GLOBALGRAPHCOLOR: default color for graphs
GLOBALFRAMECOLOR: default color for frames
GLOBALLISTCOLOR: default color for lists
H: line heights derived from the default font
pus: (pushbutton)
tog: (togglebutton)
rad: (radiobutton)
che: (checkbutton)
edi: (editbox)
tex: (text)
pop: (popupmenu)
max: (maximum over all uicontrol styles)
ModelitUtilRoot
20-Apr-2009 11:34:44
14104 bytes
getcdata - retrieve cdata for Matlab buttons
CALL:
getcdata([],[],fname) - load image file
CDATA=getcdata(icon) - retrieve icon (maintain transparency)
CDATA=getcdata(icon,BG) - retrieve icon (replace transparent cells by BG)
getcdata - Regenerate image file
getcdata([],[],fname,dirname) - Regenerate image file
INPUT:
icon: icon to retrieve
BG: fill in color for transparent cells (default NaN)
fname: name of image file (cdt extension will be added automatically)
NOTE: if fname contains no path, pwd will be prepended
automatically. (WIJZ ZIJPP sep 13)
dirname: directory to read images from when regenerating image file
OUTPUT:
cdata: M x N x 3 Cdata matrix (truecolor)
transparent cells are marked with NaN numbers
rc: rc=1 if successful
Note: use nargout>1 to suppress warnings on the console when icon
is missing from file
EXAMPLE 1
use .ico file to set icon
The icon file contains transparency info, however seticon cannot read icon files
Remedy: save a PNG file first:
%
S=getcdata('wavix16');
Transparant=isnan(S(:,:,1))|isnan(S(:,:,2))|isnan(S(:,:,3));
imwrite(S,'wavix16.png','png','alpha',uint8(~Transparant));
seticon(gcf,'myicon.png');
(Note: changing the png file only has effect after JAVA is cleared)
EXAMPLE 2
generate image file "myfile.cdt" from images in subdir "images":
getcdata([],[],'myfile','images')
ModelitUtilRoot
30-Apr-2009 11:02:42
10743 bytes
getfile - select file with a specific extension
CALL:
[fname,pname] = getfile(ext,showstr,BATCHMODE,fname,N)
INPUT:
ext: <string> extension that the file to select must have
(default value: '.m')
showstr: <string> with text shown to the user
(default value: '')
BATCHMODE: <boolean> set this to 1 to suppress interaction
(default value: 0)
fname: <string> default filename
(default value: *.ext)
N: <integer> default file category used to store the default
directory (default value: 1)
OUTPUT:
fname: <string> the selected filename
(If cancel is pressed ==> fname = 0)
pname: <string> the corresponding path INCLUDING the filesep character
EXAMPLE:
[fname,pname] = getfile('txt','Select ASCII file',0,'',C.DEFAULTDIR.STUURFILES);
if ~fname
return %user cancelled
end
fname = fullfile(pname, fname);
See also: putfile
ModelitUtilRoot
15-Apr-2010 12:48:59
3485 bytes
getoptions - read an options file
CALL:
S = getoptions(fname,KEYwordlist,defaults,CaseSen)
INPUT:
fname : input file
KEYwordlist: possible fields of S
defaults : default options
CaseSen : Case Sensitivity (0/1)
OUTPUT:
S: structure with options
EXAMPLE:
-1-
Suppose the file 'optionfile' looks like:
Option1 99
%Comment line
Option2 stringvalue
Option3 123
Then the following commands:
keyw={'Option1','Option2'}
S=getoptions('optionfile',keyw)
Results in:
S.Option1=99
S.Option2='stringvalue'
(S.option3 does not exist because 'Option3' is not in keyword list)
-2- (typical use)
default=struct('option1',1,'option2',2,'option3',3,'option4',4);
S = getoptions(fname,fieldnames(default),default);
ModelitUtilRoot
16-Aug-2008 11:25:27
4370 bytes
getproperty - return matching char string from cell array of keywords
CALL:
prop = getproperty(property,ValidProps)
INPUT:
property - char string with property. This string contains the
first letters of the keyword searched for. The matching
is Case-Insensitive.
ValidProps - cell array with valid property values
OUTPUT:
prop - string with property that matches ValidProps
EXAMPLE:
getproperty('my',{'MySpecialProperty'}) returns 'MySpecialProperty'
ModelitUtilRoot
22-Jun-2009 11:55:44
1849 bytes
getuicpos - get the extent of an object including the borders of a frame
CALL
ext=getuicpos(h)
INPUT
h: (scalar) handle of uicontrol object
OUTPUT
ext: =[ext(1) ext(2) ext(3) ext(4)];
[ext(3) ext(4)] = dimensions (extent) of the object + extra room
assumption: the units of the object are in pixels
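EXAMPLE (illustrative sketch; the uicontrol properties are assumed values):
h   = uicontrol('style', 'text', 'string', 'some label', 'units', 'pixels');
ext = getuicpos(h); %[x y width height], extent of the object plus extra room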
ModelitUtilRoot
24-Jun-2010 16:28:15
2936 bytes
getyear - convert 2 digit year date to 4 digits
CALL
yr=getyear(yr,VERBOSE)
INPUT
yr : 2 digit year (4 digits are allowed)
VERBOSE: display warning when making interpretation
OUTPUT
yr: interpreted year
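EXAMPLE (illustrative sketch; the century chosen for 2-digit input depends on the internal cut-off):
yr = getyear(99, true); %2-digit year, a warning is displayed when an interpretation is made
yr = getyear(1999);     %4-digit years are allowed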
ModelitUtilRoot
11-Jun-2010 19:17:34
988 bytes
hashtable2cell - convert java hashtable to a cell array
CALL:
c = hashtable2cell(ht)
INPUT:
ht: java.util.Hashtable
OUTPUT:
c: cell array with two columns: column 1: hashtable keys
column 2: hashtable values
See also: cell2hashtable
ModelitUtilRoot
13-Feb-2008 17:08:38
1008 bytes
height - get matrix height
CALL
w=height(str)
INPUT
str: matrix
OUTPUT
w: matrix height
SEE ALSO: size, length, width
ModelitUtilRoot
15-Aug-2008 14:47:56
220 bytes
htmlWindow - create a window in which html code can be displayed
CALL:
HWIN = htmlWindow(title, text)
INPUT:
title: string, title of the window
text: string, text to display, optionally in HTML
OUTPUT:
HWIN: handle
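EXAMPLE (illustrative sketch):
HWIN = htmlWindow('About Wavix', '<html><body><b>Wavix</b> help text</body></html>');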
ModelitUtilRoot
10-Mar-2010 10:20:38
2616 bytes
Install files to compiled directory
INPUT
packageNames: cell array containing package names
dirName: relative or absolute path to compiled directory
OUTPUT
files copied to compiled directory
EXAMPLE
installPackage({'modelit','xml','googlemaps'},'exe14');
ModelitUtilRoot
23-Nov-2009 16:55:34
3700 bytes
installjar - Create a classpath.txt file.
SUMMARY:
Modelit provides a number of functions that require the static path
to be set. The static path is set at startup and read from the file
classpath.txt. Installjar writes this file and should be run whenever
the software is installed in a new location.
installjar readme:
- Some Modelit applications rely on one or more Java-Archive's (JAR files)
created by Modelit.
- installjar.exe is a utility that installs these JAR files.
- Usually, an install script provided by modelit takes care of this.
These notes provide extra information.
- installjar.exe must be run before the Modelit application is started
- It is not necessary to run installjar.exe more than once, unless the
application files are moved to a new directory
- The installjar utility requires at least the following file structure
<This directory>
+----installjar.exe (file)
+----installjar.ctf (file)
+----java (directory)
+----modelit.jar (file)
+----<any other jar file>
CALL:
terminateApplication=installjar(ALWAYS,jarNames)
INPUT
ALWAYS: (Optional) If true: always set the class path. If false
only set the class path if needed.
jarNames: (Optional) Cell array that contains the names of required
jar files. When omitted all files in jar directory are
installed
INTERPRETED MODE
if no arguments are specified all jar files in the utility
directory mbdutils\java are added to the static
javaclasspath.
Any specified jar files should be located in the directory
"...\mbdutils\java" when ran in on Matlab path
COMPILED MODE
if no arguments are specified all jar files in the
directory pwd\java are added to the static javaclasspath.
Any specified files should be located in directory "pwd\java"
OUTPUT
terminateApplication:
if true: terminate application (compiled mode) or terminate
Matlab session (interpreted mode)
ModelitUtilRoot
25-Aug-2009 14:01:52
6371 bytes
is_eq - verify if argument pairs are equal
CALL
equal=is_eq(arg1,arg2,arg3,arg4,...)
INPUT
arg1,arg2: first argument pair
arg3,arg4: second argument pair
...: etc
OUTPUT
equal: 1 if all argument pairs have corresponding size and are equal
NOTE
function is verbose if no output arguments are required
EXAMPLE (non verbose mode)
>> a=is_eq(1,0)
a =
0
EXAMPLE (verbose mode)
>> is_eq(1,0)
Arg 1,2: Not equal, Max Abs Diff = 1.000000
ModelitUtilRoot
02-Aug-2010 10:56:20
6084 bytes
is_in - vectorized version of 'find'
CALL
function elm=is_in(g,h,H,Hindx)
INPUT
g: vector with elements to be located
h: vector in which elements are looked for
H: sorted version of h (saves time)
Hindx: outcome of [hsort hidx]=sort(h);
OUTPUT
elm: returns indices >1 for each element of g which corresponds to elements of h
returned value corresponds with FIRST occurrence in h
EXAMPLE
[H,Hindx]=sort(h);
for ...
elm=is_in(g,[],H,Hindx)
..
end
EXAMPLE (2):copy elements from other table using key
[f,ok]=is_in(key1,key2)
attrib1(ok)=attrib2(f(ok))
NOTE
In some cases "unique" is more efficient. Example:
INEFFICIENT CODE:
u_nrs=unique(nrs);
indx=is_in(nrs,u_nrs);
EFFICIENT CODE:
[u_nrs,dummy,indx]=unique(nrs);
See also:
ismember (Matlab native)
is_in (deals with vectors)
is_in_id (return matched IDs instead of indices)
is_in_find (shell around is_in that returns first, second, etc. match)
is_in_sort (deals with sorted vectors)
row_is_in (deals with rows of a matrix)
is_in_struct (deals with structures)
is_in_eq (deals with equidistant time series)
ModelitUtilRoot
18-Jun-2009 12:32:31
5597 bytes
is_in_eq - equivalent to is_in but designed for equidistant time series
CALL
function g2h=is_in(g,h)
INPUT
g: vector with equidistant time series
h: vector with equidistant time series
NOTE!!: g and h must have equal stepsizes
OUTPUT
g2h: returns indices >1 for each element of g which corresponds to elements of h
returned value corresponds with FIRST occurrence in h
SEE ALSO
is_in (deals with vectors)
row_is_in (deals with rows of a matrix)
is_in_struct (deals with structures)
is_in_eq (deals with equidistant time series)
is_in_sort (deals with sorted time series)
findstructure (find a structure in a structure array)
ModelitUtilRoot
22-Sep-2004 10:16:21
2277 bytes
is_in_struct - find matching elements of structure in structure array
CALL
result=is_in_struct(PatternStruct,StructArray)
INPUT
PatternStruct: structure to look for
this must be a non-empty structure
StructArray: structure array to look in
this must be a structure array that has at least the
fields of PatternStruct
flds: fields to compare (optional)
default value: intersection of fields in PatternStruct and StructArray
OUTPUT
result:
result(k)=0 ==> no matching structure in StructArray for PatternStruct(k)
result(k)>0 ==> patternStruct(k) is matched by StructArray(result(k))
NOTE
this function has not been optimized for speed. Consider table_ismember
for data intensive queries
SEE ALSO
is_in (deals with vectors)
row_is_in (deals with rows of a matrix)
is_in_struct (deals with structures)
is_in_eq (deals with equidistant time series)
is_in_sort (deals with sorted time series)
table_ismember (deals with table structure)
findstructure (find a structure in a structure array)
ModelitUtilRoot
19-Jun-2009 13:48:59
4991 bytes
istable - check if S can be considered as a table structure
CALL
[ok,N,emsg]=istable(S)
INPUT
S: (candidate) table structure
OUTPUT
ok : true if S is table structure; false otherwise
N : height of table
emsg: <string> extra information if ok is false
TABLE DEFINITION (added by ZIJPP 2001225)
A table structure is a structure that meets the following conventions.
- A table structure is a single structure with 0, 1 or more columns
- If a table contains more than 1 column, all columns must be equal height
- A column may be one of the following
- a numeric or char array, including:
- empty arrays ([0xW numeric or char])
- vectors of NILL elements ([Hx0 numeric or char])
- The preferred way to initialize an empty table structure is:
T=struct ==> T= 1x1 struct array with no fields
- By convention an empty scalar array or an empty struct array may be
used to initialize an empty table structure:
T=[] or T = struct([])
KNOWN ISSUES
tableselect removes fieldnames if all rows of a table are removed
EXAMPLE
if ~istable(S)
error('Assertion failed: variable is not a table structure');
end
ModelitUtilRoot
25-Dec-2009 16:39:03
2369 bytes
Get handle of java object for Matlab object
CALL
h_Java=javahandle(h)
INPUT
h: Matlab hg handle (uitoolbar or figure)
OUTPUT
h_Java: Java handle
EXAMPLES
htool=uitoolbar;
jh=javahandle(htool);
jh.addGap(1000); %divide left and right cluster
jh.addSeparator; %add Separator
jh=javahandle(gcf);
get(jh); %show current properties
methods(jh); %see what you can do with this window
ModelitUtilRoot
20-Apr-2009 11:34:47
2271 bytes
Compilable version of 'load'
CALL
[var1,var2,... ]= load_cmp(fname,varname1,varname2,...)
INPUT
fname: name of the mat file INCLUDING extension
varname1, varname2: names of the variables in the file
OUTPUT
the call is equivalent to:
load(fname)
var1=varname1
var2=varname2
..
SEE ALSO:
load_var (requires Matlab V7)
ModelitUtilRoot
21-Nov-2005 12:00:15
3013 bytes
loadnnpackage - reference the Neural Network functions that are used
within Wavix, so that Wavix can be compiled
CALL:
loadnnpackage
INPUT:
no input
OUTPUT:
no direct output; Wavix can now be compiled including the
neural network functionality
ModelitUtilRoot
25-Oct-2006 18:22:15
5519 bytes
ModelitUtilRoot
17-Mar-2008 18:08:03
20480 bytes
mbd_restore - restore the interactive state of the GUI
interface.
CALL
mbd_restore(uistruct)
INPUT
uistates: structure array = output of UISUSPEND (input for UIRESUME)
figureHandle : figure handle
figsettings : figure attributes
children : handles of children
childsettings : struct array with settings
uic_children : handle of uimenu, uicontrol and uicontext children
uic_childsettings: struct array with settings
OUTPUT
none
OUTPUT TO SCREEN
Restore the interactive state of the GUI
interface.
APPROACH
Restore the interactive state of the GUI
interface using the function UIRESTORE and the field
uistruct.uistates.
Restore the 'enable' state of the objects uistruct.objhandles using
the field uistruct.uienable.
See also: MBD_SUSPEND UIRESTORE UISUSPEND
ModelitUtilRoot
15-Aug-2008 13:52:41
2500 bytes
mbd_suspend - suspend the interactive state of the GUI
interface.
CALL
uistates=mbd_suspend
INPUT
none
OUTPUT
uistates: structure array = output of UISUSPEND (input for UIRESUME)
figureHandle : figure handle
figsettings : figure attributes
children : handles of children
childsettings : struct array with settings
uic_children : handle of uimenu, uicontrol and uicontext children
uic_childsettings: struct array with settings
OUTPUT TO SCREEN
suspend the interactive state of the GUI
interface.
EXAMPLE:
try
%- Switch off the interactive properties of the application (mbd_suspend)
uistate=mbd_suspend;
<ACTION>
catch
%- Activate the interface (mbd_restore)
mbd_restore(uistate);
error(lasterr);
end
%- Activate the interface (mbd_restore)
mbd_restore(uistate);
ModelitUtilRoot
23-Jan-2006 01:28:46
5408 bytes
mbdlabel - create an interactive text label
CALL
mbdlabel(h,str,Options)
INPUT
h : handle of object that gets an extra buttondown function
str : pop up message
Options: parameter value pairs
permitted values
show: on {off}
on: activate after button press
button: activate now
buttond: {on} off
on: install buttondown activation
off: do not install buttondown activation
mode: {arrow} text box
arrow : show arrow and text
text : show plain text
box : show label in box
NOTE
This function can also be used to set one label on multiple objects.
These objects then form a virtual object from the viewpoint of label
setting
EXAMPLES
show label in box, popup now, do not set interactive props
mbdlabel(gco,label,'mode','box','show','on','buttond','off');
hide label
mbdlabel(gco,'');
ModelitUtilRoot
15-Aug-2008 14:48:17
7288 bytes
mbdparse - parse user input
CALL
mbddisplay(h,val);
display argument "val" in object "h"
[val,ok]=mbdparse
[val,ok]=mbdparse(h)
retrieve argument "val" from object "h"
check validity of input
INPUT
h: uicontrol or jacontrol handle
value: value to display in field (typically used at installation)
INDIRECT INPUT
opt=getappdata(h,'opt')
application data for this object
SUPPORTED OPTIONS:
opt.dealwith : function that replaces mbdparse entirely
DEFAULTVALUE: opt.dealwith ='';
opt.format : format string for reading
(not required if 'dealwith' or 'type' specified)
NOTE: do not use '%.f' or similar for reading, use '%f' instead
if formatting is required specify opt.displayformat='%.2f'
DEFAULTVALUE: opt.format ='';
opt.type : type of field
(not required if 'dealwith' or 'format' specified)
int : integers
double : doubles
str : str
url : url (e.g. http://www.modelit.nl)
filename : str (opt.filter)
directory: str
date : date dd/mm/yyyy mm/yyyy or yyyy
ddmmm : date dd/mm/yyyy dd/mmm or mmm
time : HH:MM
DEFAULTVALUE: opt.type ='';
opt.multiple : allow entering multiple values separated by ; or space (default 0)
works for type = int,double not tested for
other types
opt.required : forced field (empty not allowed)
DEFAULTVALUE: opt.required =0;
opt.emptywarn: warning to be provided when emptystring is
encountered
DEFAULTVALUE: opt.emptywarn ='Empty input not allowed for this field';
opt.emptystr : string that is displayed when field is empty
DEFAULTVALUE: opt.emptystr ='';
opt.filter : filter applicable when opt.type == filename
DEFAULTVALUE: opt.filter ='*';
opt.prefdir : tag to be passed to defaultpath
DEFAULTVALUE: opt.prefdir =1001;
opt.exist : if 1 (or not specified) check existence of file or directory
DEFAULTVALUE: opt.exist =1;
opt.minimum : minimum value (== is allowed)
DEFAULTVALUE: opt.minimum = -inf;
opt.minstr : message if value too low
DEFAULTVALUE: opt.minstr ='Value too low';
opt.maximum : maximum value (== is allowed)
DEFAULTVALUE: opt.maximum = inf;
opt.maxstr : message if value too high
DEFAULTVALUE: opt.maxstr ='Value too high';
opt.oldvalue : previous value (to be restored if new value is incorrect)
DEFAULTVALUE: opt.oldvalue =[];
opt.displayformat : format string for displaying
DEFAULTVALUE: opt.displayformat='';
opt.compact: works for type=filename
opt.settooltip: copy string into tooltip (for display of
long strings in small fields)
DEFAULTVALUE: opt.settooltip=0;
opt.codeword: (Only applicable if opt.type==filename)
accept these codewords even if they do not
match the filename
DEFAULTVALUE: {} (empty cell array)
EXAMPLES: opt.codeword='<NO SELECTION>'
opt.codeword={'<NO SELECTION>','<ALL FILES>'}
opt.parent: get options from specified parent
OUTPUT
val: value entered by user
ok : ok==1 if value is successfully entered
EXAMPLE
figure
h=uicontrol('style','edit','str','20','callb',{@mbdparse,1});
opt=struct('type','int',...
'minimum',0,...
'minstr','value too low',...
'maximum',100,...
'maxstr','value too high',...
'compact',0,...
'oldvalue',50,...
'required',1,...
'feedback',1);
setappdata(h,'opt',opt);
mbdparse(h)
See also:
val=mbdparsevalue
www.modelit.nl/modelit/matlabnotes/mbdparse-dropdown.pdf
ModelitUtilRoot
12-Sep-2010 19:08:12
30885 bytes
mbdparsevalue - convert data entered in an edit field
CALL
val=mbdparsevalue(h)
INPUT
h: handle of object
opt.oldvalue : previous value (to be restored if new value is incorrect)
OUTPUT
val: value entered by user
SEE ALSO: mbdparse
ModelitUtilRoot
15-Aug-2008 14:05:56
371 bytes
mexprint - mex version of print
USE
shield print command from mcc -x -h command
1. compile mexprint.m mcc -x mexprint (produces mexprint.dll)
2. replace print with mexprint in all m files
3. compile application with mcc -x -h application mexprint.dll
ModelitUtilRoot
03-Jul-2003 11:50:20
339 bytes
movegui_align - similar to MOVEGUI but positions the figure relative to
another figure instead of a position on the screen. The other window is
treated as the screen area.
CALL
movegui_align(fig,hrelfig,position)
INPUT
fig: Handle of figure that is to be moved
hrelfig: Position relative to this object. Possible values
Figure handle
"pointer" position relative to pointer
position: way of positioning
The POSITION argument can be any one of the strings:
'north' - top center edge of screen
'south' - bottom center edge of screen
'east' - right center edge of screen
'west' - left center edge of screen
'northeast' - top right corner of screen
'northwest' - top left corner of screen
'southeast' - bottom right corner of screen
'southwest' - bottom left corner of screen
'center' - center of screen
'onscreen' - nearest onscreen location to current position.
'pointer' - nearest onscreen location to current position.
EXAMPLE:
movegui_align(gcf,'pointer','northwest'),movegui(gcf,'onscreen');
See also: movegui
ModelitUtilRoot
16-Aug-2008 10:07:18
4580 bytes
msg_temp - display message that goes away after a few seconds INPUT/OUTPUT: see warndlg
ModelitUtilRoot
27-Nov-2008 10:16:22
1200 bytes
multiwaitbar - plot one or more waitbars in a unique figure
CALL:
HWIN = multiwaitbar(bartag,x,wbtext,varargin)
HWIN = multiwaitbar(bartag,x,wbtext,'stopb','abort',varargin)
HWIN = multiwaitbar(bartag,x,wbtext,'suspb','abort',varargin)
HWIN = multiwaitbar(bartag,x)
HWIN = multiwaitbar(bartag,-1)
INPUT:
bartag: <string> signature of waitbar
x: <double> progress (in % ==> range = 0-100)
Note: set x to NaN for indefinite waitbar
wbtext: <string> text above waitbar
Note: the space reserved for this text is determined
at startup of the waitbar
varargin: <varargin{:}> properties to be passed on to the "figure" command (has no
effect when figure already exists)
SPECIAL KEYWORDS:
- " 'stepsize',5" changes default stepsize to 5. Aall
calls will be ignored, unless
rem(x,5)==0.
- " 'stopb','abort' " adds a stopbutton with text 'abort'
- " 'suspb','abort' " adds a suspend button with text
'abort' this works together with
function "stopwaitbar"
NOTE: the arguments passed in "varargin" are only used when
the waitbar is created. In other words: these arguments can
not be used to change an existing waitbar figure.
OUTPUT:
HWIN: <handle> of the figure with the waitbar(s)
SHORT EXAMPLE: multiwaitbar('uniqueTag',10,'10%','name','example')
EXAMPLE:
hwait=multiwaitbar('loop1',0,'','name','Check');
for k=1:10
multiwaitbar('loop1',10*k,sprintf('k=%d',k));
for r=1:5:100
if stopwaitbar(hwait),return;end
multiwaitbar('loop2',r,sprintf('r=%d',r));
pause(0.01)
end
end
multiwaitbar('loop1',-1);
multiwaitbar('loop2',-1);
See also:
stopwaitbar
closewaitbar
ModelitUtilRoot
05-May-2010 14:58:35
12871 bytes
name - set title of current figure to specified value
CALL
name(nme)
INPUT
nme: name of figure
OUTPUT
This function returns no output arguments
ModelitUtilRoot
15-Aug-2008 14:36:25
239 bytes
offon - replace 0 with 'off' and 1 with 'on'
CALL:
val = offon(val)
INPUT:
val: 0,1 or character string
OUTPUT:
val: string, possible values: 'off' or 'on'
ModelitUtilRoot
13-Apr-2009 12:04:26
362 bytes
patchvalue - callback for interactive patch labels
CALL
patchvalue(obj,event,varargin)
INPUT
obj : object that is clicked on
event : not used
varargin: <attribute,value> pairs
Table of valid attributes
ATTRIBUTE DEFAULT ACCEPTED VALUES
zdata [] numeric, char or struct arrays. The
size should correspond to xdata,
meaning that if xdata is MxN,
zdata can either be MxN, 1xN or 1x1
(if numeric) or NxP or 1xP (if
character).
If zdata is a structure array, a parse
function must be specified
textoptions arial,bold valid text options
NOTE: These options will be passed on to text object
labeltag GRIDLABEL tag attached to labels
NOTE: this tag is needed to remove the
object (overrides textoptions
property)
format %.0f format string for plotting numeric values
NOTE: this property is used when
datatype is double
datatype double date or double
NOTE: use this option to display date
labels
parsefunction [] any function pointer; function used to parse results
NOTE: use this option if none of the above
works. The function will be called
with one argument (the selected
zvalue)
selectmode first first or all
NOTE: when this option is selected the
search for a valid patch will stop as
soon as one is found. This speeds up
the process but may not be what you
want if patches overlap
labellocation center center or pointer
NOTE: by default labels are plotted in the
center of each patch. alternatively they may
be plotted at the point where the user
clicks
OUTPUT
This function returns no output arguments
EXAMPLE
h_patch=patch(X,Y,Z,'facec','r','buttond',@patchvalue);
setappdata(h_patch,'datatype','date'); %(optional, defaults to "double")
setappdata(h_patch,'zdata',sqrt(Z)); %(optional, defaults to zdata from patch)
h_patch=patch(X,Y,Z,'buttond',{@patchvalue,'%.0f','center','SWANLABEL'},'parent',h_kaart);
...
delete(findobj('tag','SWANLABEL')); %this removes the labels for this patch
%while leaving other labels intact
ModelitUtilRoot
15-Aug-2008 16:53:44
8110 bytes
pathcomplete - extend filename with path
CALL
pnamefname=pathcomplete(CurrentPath,fname)
INPUT
CurrentPath : current path
fname : filename (possibly includes path)
OUTPUT
pnamefname : filename extended with path
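EXAMPLE (sketch):
The following lines are an illustrative sketch, not part of the original help
text; the directory and filename are hypothetical.
CurrentPath = 'C:\data\';
pnamefname  = pathcomplete(CurrentPath, 'results.txt');
%expected to yield 'C:\data\results.txt' when fname contains no path of its own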
ModelitUtilRoot
28-Apr-2003 14:21:09
427 bytes
pcolorBar - plot vector as a collection of colored boxes
CALL:
h = pcolorBar(X, Y, varargin)
INPUT:
X: vector with xdata
Y: matrix with data to plot against xdata
varargin: <parameter-value pairs>
'xax' - <vector> indicating edges of boxes, length is
size(data,2) + 1
'yticks' - <cellstring> specifies the yticks, length is
size(data,1);
OUTPUT:
h: <matrix> with patchhandles
See also: pcolor
ModelitUtilRoot
22-Nov-2007 04:13:12
1858 bytes
pcolorPlot - plot matrix as a collection of colored boxes
CALL:
pcolorPlot(X, Y, varargin)
INPUT:
X: vector with xdata
Y: matrix with data to plot against xdata
varargin: <parameter-value pairs>
'xax' - <vector> indicating edges of boxes, length is
size(data,2) + 1
'yticks' - <cellstring> specifies the yticks, length is
size(data,1);
OUTPUT:
no output
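EXAMPLE (sketch):
An illustrative call based on the inputs described above (the data values are
made up); 'xax' has size(data,2)+1 elements and 'yticks' has size(data,1).
X = 1:5;                      %xdata
Y = rand(3,5);                %matrix plotted against xdata
pcolorPlot(X, Y, 'xax', 0.5:1:5.5, 'yticks', {'low','mid','high'});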
See also: pcolor, pcolorBar
ModelitUtilRoot
27-Mar-2008 11:09:32
2164 bytes
points2pixels - short hand for converting points to pixels
CALL
pixels = points2pixels(points)
INPUT
points: <double>
position in points
OUTPUT
pixels: <double>
position in pixels
ModelitUtilRoot
16-Aug-2008 12:15:59
420 bytes
postcode2pos - return WGS position from Dutch Zip code
CALL
[pos,adress,msg]=postcode2pos(PC)
INPUT
PC : 4 digit or 6 digit Dutch Zip code
OUTPUT
pos : [Longitude, Latitude] !!! note that Longitude comes first !!
adress : PC, City (State) Country
msg : error message if applicable (empty==> no error)
EXAMPLE
Verification:
Rotterdam is a city in Zuid-Holland at latitude 51.923, longitude 4.478.
postcode2pos('3042as')==> pos= [4.4294 51.9344]
ModelitUtilRoot
19-Oct-2009 23:42:00
8274 bytes
print2file - start GUI for exporting graph
CALL:
HWIN = print2file
HWIN = print2file(hfig)
HWIN = print2file(obj, event, hfig, varargin)
INPUT:
obj,event: standard Matlab callback arguments
hfig: handle of figure for which to create plot
varargin: property value pairs. Accepted property names:
PROPERTY {DEFAULT}
language {'dutch'} 'english';
constants {[]};
visible {true} false;
OUTPUT:
HWIN: handle of GUI figure
EXAMPLE:
uimenu(hFile,'label','Print figure','callback',@print2file);
See also: print2file_Execute
ModelitUtilRoot
10-Mar-2010 16:20:52
32709 bytes
pshape - provide cursor for use in zoomtool
CALL
shape=pshape
INPUT
one input argument is allowed, but no input arguments are used
OUTPUT
pshape
16x16 array that can be used as cursor
EXAMPLE
set(gcf,'pointershapecdata',pshape);
See also: zoomtool
ModelitUtilRoot
15-Aug-2008 11:19:04
1592 bytes
putfile - return file with specific extension from default directory
CALL
[fname,pname]=putfile(ext,showstr,BATCHMODE,fname,tag)
INPUT
ext : extension of file to be selected
(default value: '.m')
showstr : Text above figure
(default value: '')
BATCHMODE : if true: suppress any interaction
(defaults to 0)
fname : default filename
(defaults to *.ext)
tag : tag for category of file. See defaultpathNew. Can be
integer or string.
(defaults to 1)
OUTPUT
fname : the selected filename, or 0
NOTE: fname==0 (and not fname=='') indicates cancel!
pname : the corresponding path
USER INPUT
the user selects a filename
EXAMPLE (1)
[fname,pname]=putfile('txt','Save ASCII file',0,'MyFile','AsciiDump');
if ~fname
return
end
fname=[pname fname];
..
See also: UIPUTFILE PUTFILE
ModelitUtilRoot
02-Jun-2010 15:38:26
3374 bytes
rbline - select range, then execute CommandStr
CALL
rbline('hor')
rbline(1,CommandStr)
INPUT
arg1: mode of operation
CommandStr: cell array:
{FuncPointer, arg1,arg2,...}
OUTPUT
no direct output: the function in CommandStr is called with xrange(1) and xrange(2)
ModelitUtilRoot
16-Mar-2010 13:55:00
3325 bytes
rbline2 - select range, then execute CommandStr
CALL
rbline2('hor')
rbline2(1,CommandStr)
INPUT
attribute value pairs:
ATTRIBUTE DEFAULT IMPACT
axes gca axes to display in
callback none function to call when interval has been selected
this function will be called in the following way:
fpointer(obj,event,x1,x2)
REMARK
cursor position is retrieved from the current axes
OUTPUT
no direct output: the function in CommandStr is called with xrange(1) and xrange(2)
ModelitUtilRoot
15-Aug-2008 16:18:20
6317 bytes
readComments - similar to help but returns a cell array with help CALL: C = readComments(filename, comment) INPUT: filename: <string> comment: <character>, default value: '%' OUTPUT: C: <cell array> See also: help
ModelitUtilRoot
25-Aug-2009 17:58:32
997 bytes
readcell - read character array from file
CALL:
strs = readcell(fname, n)
INPUT:
fname: file to read
n: number of lines to read
OUTPUT:
strs:
See also: writestr, readstr
ModelitUtilRoot
27-Mar-2008 14:10:26
549 bytes
readstr - read character array from file
CALL:
readstr(fname, n, decomment)
INPUT:
fname: file to read
n: number of lines to read
decomment: if true do not return commented lines
OUTPUT:
str: string with file contents
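EXAMPLE (sketch):
An illustrative call, not part of the original help text; the filename is
hypothetical.
str = readstr('settings.txt', 100, true);  %read at most 100 lines, skip commented lines
disp(str)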
See also: writestr, readcell
ModelitUtilRoot
24-Jan-2008 10:06:36
1959 bytes
real2str - print real data in columns
SUMMARY
real2str is equivalent to num2str but much faster
CALL
[str,W]=real2str(X,N)
INPUT
X: vector or matrix
N: number of digits after the decimal point
OUTPUT
str: output string
W: Width of each column
See also: vec2str
ModelitUtilRoot
27-Feb-2009 12:42:13
2565 bytes
rightalign - make right aligned header of exactly N positions
CALL
RA_header=rightalign(LA_header,N)
INPUT
LA_header : Left aligned string NOTE!! vectorized input is supported
N : Number of digits required
OUTPUT
RA_header : Right aligned header
EXAMPLE
[str,N]=real2str(1000*rand(5,2));
la_hdr=strvcat('col1','col2');
hdr=rightalign(la_hdr,N);
disp(hdr);
disp(str);
SEE ALSO
leftalign
ModelitUtilRoot
02-Nov-2004 18:57:33
1212 bytes
rmfiles - remove files/directories (for safety the full path must be
specified)
CALL:
rmfiles(files)
INPUT:
files: <cellstr> with filenames
<chararray> with filenames
OUTPUT:
none, the specified files/directories are deleted
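EXAMPLE (sketch):
A small sketch, not part of the original help text, showing the full-path
requirement mentioned above; the scratch files are hypothetical and assumed
to exist.
tmpfiles = {fullfile(pwd,'scratch1.tmp'), fullfile(pwd,'scratch2.tmp')};
rmfiles(tmpfiles);   %both names include the full path, as required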
See also: rmdir, delete
ModelitUtilRoot
27-Sep-2006 09:53:24
1659 bytes
row_is_in - recognize rows of matrix A in matrix B (and vice versa)
CALL:
[A2B,B2A]=row_is_in(A,B,Aunique)
INPUT:
A,B: matrices
Aunique: set this argument to 1 if A consists of unique rows
OUTPUT:
A2B: vector with length = size(A,1)
if A2B(i)~=0: A(i,:) == B(A2B(i),:)
B2A: vector with length = size(B,1)
if B2A(i)~=0: B(i,:) == A(B2A(i),:)
REMARKS
returns an index > 0 for each row of A which corresponds to a row of B
the returned value corresponds with the FIRST occurrence in B
NOTE
In some cases "unique" is more efficient. Example:
INEFFICIENT CODE:
u_rows=unique(strs,'rows');
indx=row_is_in(strs,u_rows);
EFFICIENT CODE:
[u_rows,dummy,indx]=unique(strs,'rows');
See also
is_in (deals with vectors)
row_is_in (deals with rows of a matrix)
is_in_struct (deals with structures)
is_in_eq (deals with equidistant time series)
is_in_sort (deals with sorted time series)
strCompare
unique
ModelitUtilRoot
18-Jun-2009 11:25:17
4852 bytes
runlength - determine the runlength of values in a vector CALL: [len val] = runlength(x) INPUT: x: <vector of double> OUTPUT: len: <vector of integer> number of consecutive repetitions of value val: <vector of double> value See also: invrunlength
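EXAMPLE (sketch):
An illustrative sketch based on the description above (the exact output
orientation is an assumption).
x = [1 1 2 2 2 3];
[len, val] = runlength(x);
%expected: len = [2 3 1], val = [1 2 3]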
ModelitUtilRoot
20-Oct-2007 12:04:06
427 bytes
selectdate - select date by clicking on calendar
CALL
[date,rc]=selectdate(datenr)
[date,rc]=selectdate
INPUT
datenr: initial date value (defaults to today)
OUTPUT
datenr: selected date value (empty if cancel)
SEE ALSO
dateselector
ModelitUtilRoot
11-Oct-2005 09:40:50
3735 bytes
selectdir - Open the directory selector and wait for user reply
CALL:
pth = selectdir(obj,event,curdir,N)
pth = selectdir(obj,event,curdir)
pth = selectdir(obj,event)
INPUT:
obj,event: not used
curdir: (string) initial directory. If curdir is not a valid directory, pwd
will be assumed
N: (integer) directoryTypeIdentifier. This number will be used to retrieve and
store the history of selected directories
OUTPUT:
pth: -1- if not empty: the name of a directory whose existence has
been verified.
-2- NOTE: pth includes the "\" sign at the end
-3- empty if user has cancelled.
-4- When a directory is successfully selected, selectdir issues a
call to defaultpath using directoryTypeIdentifier N. The
next time the directory selector opens this directory is
presented as one of the alternatives.
SEE ALSO: defaultpath
ModelitUtilRoot
21-Feb-2010 15:40:15
8315 bytes
setMouseWheel - set callback for mouseWheel for figure
CALL
setMouseWheel(fcn)
setMouseWheel(fcn,HWIN)
INPUT
fcn: callback function
HWIN: handle of figure
Nanne van der Zijpp
Modelit
www.modelit.nl
ModelitUtilRoot
15-Aug-2008 21:30:04
3569 bytes
setPassive - communicate with the server in "passive" mode, even if this is
not the server's default. Some DSL modems do not support the active ftp
mode.
CALL
setPassive(ftpobj)
INPUT
ftpobj <class ftp>:
ftp connection
OUTPUT
This function returns no output arguments
ModelitUtilRoot
16-Aug-2008 11:10:26
472 bytes
setProxy - set proxy settings
CALL:
setProxy(proxyadres, proxypoort)
INPUT:
proxyadres: <string> address of the proxy server
proxypoort: <integer> port number
OUTPUT:
no output
APPROACH:
if proxyadres and proxypoort are empty the proxy is disabled
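EXAMPLE (sketch):
A minimal sketch, not part of the original help text; the proxy address and
port are hypothetical, and the form of the "disable" call is an assumption
based on the approach described above.
setProxy('proxy.example.com', 8080);   %route urlread/urlwrite traffic through the proxy
setProxy('', []);                      %empty input is assumed to disable the proxy again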
See also: urlread, urlwrite, getProxy
ModelitUtilRoot
13-Jul-2007 10:51:30
725 bytes
seticon - change the icon of a matlab figure
CALL
seticon(HWIN,iconfile):
set icon of HWIN to data in file "iconfile". Supported file
formats: PNG, GIF, BMP, ICO (only GIF and PNG support transparency
data). This is the most common way to call seticon.
seticon(HWIN):
set icon of HWIN to previously used value
seticon:
set icon of current window to previously used value
seticon(HWIN,cdata):
set icon of HWIN to cdata.
Note: cdata is not "remembered" so a next call to seticon without
arguments will NOT reproduce this icon
seticon(HWIN,X,MAP):
set icon of HWIN to [X,map]
Note: [X,map] is not "remembered" so a next call to seticon
without arguments will NOT reproduce this icon
seticon(0)
Reset persistent icon (do not update figure)
INPUT
HWIN:
figure to operate on
iconfile:
file to read icon from
cdata:
truecolor image
[X,map]:
indexed image
OUTPUT
This function returns no output arguments
NOTES
On windows, best results are obtained when using bitmaps of 16 x 16 pixels
When transparent icons are required, use a GIF or PNG format
This version has been tested with Matlab 7.0, 7.01 and 7.04 and the
corresponding Matlab Compilers (see Details below)
LIMITATIONS
In Matlab version 7.04, seticon has no effect on figures for which the
attribute 'visibility' is set to 'off'. It is expected that this problem
can be solved in a later version. In earlier Matlab versions this problem does not occur.
To obtain optimal results across versions one may invoke seticon twice, see
example below.
NOTE August 5 2005: the problem seems to be solved by introducing a
timer that retries until the window becomes
visible
EXAMPLE(1): set icon on 1 window
HWIN=figure('vis','off') %hide window while construction is in progress
seticon(HWIN,'modelit.png'); % (typical for Matlab v7.0/v7.01)
<create graphs and uicontrols>
set(HWIN,'vis','on');
drawnow; %<< (only required for Matlab v7.04)
seticon(HWIN); %<< (only required for Matlab v7.04)
EXAMPLE(2): set icon on each future window
set(0,'DefaultFigureCreateFcn',@modelitIcon)
function modelitIcon(obj,event)
seticon(obj,'modelit.png');
COMPATIBILITY NOTE
The behaviour of seticon may change when new Java or Matlab versions
are installed. The Seticon utility relies on some undocumented Matlab
features that may change or disappear in future Matlab versions. It
is expected that seticon can be adapted to such changes. However no
guarantees of whatever kind are given that a seticon version will be
available for Matlab versions beyond 7.04.
See also: imread, icon2png
ModelitUtilRoot
17-Apr-2009 10:33:16
10715 bytes
shiftup(hfig) - move window by a multiple of its size
SUMMARY
This function is typically used to place a second waitbar directly
above the first one. Note that nowadays multiwaitbar is available for
displaying stacked waitbars.
CALL:
shiftup(hfig,direction)
INPUT:
hfig: figure handle
direction: [vertical,horizontal] movement
default: [1,0]
OUTPUT:
This function returns no output arguments
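EXAMPLE (sketch):
A sketch of the usage described in the summary; not part of the original help
text, and the handles are hypothetical.
hwait1 = waitbar(0,'first task');
hwait2 = waitbar(0,'second task');
shiftup(hwait2);   %place the second waitbar directly above the first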
See also: multiwaitbar
ModelitUtilRoot
29-Sep-2009 17:38:02
772 bytes
slashpad - complement path with filesep symbol
CALL
str=slashpad(str)
INPUT
str: filename
OUTPUT
str: filename appended with file separator
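EXAMPLE (sketch):
An illustrative sketch, not part of the original help text; the results shown
are the expected behaviour on Windows.
pth = slashpad('C:\data');    %expected: 'C:\data\'
pth = slashpad('C:\data\');   %expected to remain 'C:\data\'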
ModelitUtilRoot
15-Aug-2008 13:03:27
357 bytes
waitstatus - return false if waitbar has been removed or stopped
CALL:
stop = stopwaitbar(HWIN)
INPUT:
HWIN: <handle> of the figure with the waitbar(s)
HWIN==-1: ignore mode
OUTPUT
stop: TRUE ==> stop
FALSE ==> continue
EXAMPLE:
for k=1:10
hwait=multiwaitbar('loop1',10*k,sprintf('k=%d',k));
for r=1:5:100
if stopwaitbar(hwait),return;end
multiwaitbar('loop2',r,sprintf('r=%d',r));
pause(0.01)
end
end
multiwaitbar('loop1',-1);
multiwaitbar('loop2',-1);
See also
multiwaitbar
closewaitbar
ModelitUtilRoot
07-Oct-2008 09:55:05
920 bytes
convert string to fieldname that can be used in Matlab structure
ModelitUtilRoot
21-Feb-2006 13:10:21
554 bytes
strcol - display string in columns, so that maximum number of rows is not
exceeded
CALL
A=strcol(strs,nRowMax,sepstr)
INPUT
strs : char array to display
nRowMax: maximum acceptable number of rows
sepstr : separator string between columns
OUTPUT
A: strs formatted in columns
EXAMPLE
s=dir;
disp(strcol(char(s.name),5,' '))
ModelitUtilRoot
15-Aug-2008 16:26:38
808 bytes
struct2cellstr - convert a structure to a cell array of strings
CALL:
C = struct2cellstr(S, fields)
INPUT:
S: structure
fields: (optional) cellstr with fields to use
OUTPUT:
C: cell array with the field names in column 1 and
the values as strings in column 2, converted with toStr
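EXAMPLE (sketch):
An illustrative sketch, not part of the original help text; the exact string
formatting depends on toStr.
S = struct('name','abc', 'value',1.5);
C = struct2cellstr(S);
%expected: C = {'name','abc'; 'value','1.5'}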
See also: toStr
ModelitUtilRoot
20-Jan-2008 11:32:34
550 bytes
struct2char - convert single structure to horizontally concatenated char
array
CALL:
report = struct2char(S, flds)
INPUT:
S: structure
flds: cell array of fields to display
OUTPUT:
report : string to display
See also: rightalign, struct2str
ModelitUtilRoot
20-Jan-2008 11:29:20
883 bytes
struct2str - convert struct or structarray to vertically concatenated
table
CALL
[str,hstr,colw] = struct2str(S,flds)
INPUT:
S: structure array
flds: cell array of fields to display
OUTPUT:
str : string to display
hstr : table with column content labels
colw : width of each column
EXAMPLE:
[str,hstr,colw]=struct2str(S);
headers=strvcat(....) %specify headers
titlestr=rightalign(headers,colw)
See also: rightalign, structarray2dlm, struct2char, structarray2table
table2structarray
ModelitUtilRoot
11-Sep-2009 11:25:39
1421 bytes
struct2treemodel - fast way to convert structure to treemodel
CALL:
model = struct2treemodel(S, model, parent)
INPUT:
S:
array of structures
model:
parameter that can be passed to jacontrol with style JXTable
parent:
initial parent
OUTPUT:
model:
parameter that can be passed to jacontrol with style JXTable
EXAMPLE
[table h] = jacontrol('style','JXTable',...
'scrollb',true,...
'ColumnControlVisible',true,...
'SelectionMode',3,...
'showgrid','none');
set(table,'Content',struct2treemodel(S));
ModelitUtilRoot
16-Aug-2008 13:52:44
1722 bytes
struct2varargin - convert a structure to parameter/value pairs;
the parameter names are the field names, the values are the
values of these fields.
CALL:
args = struct2varargin(S)
INPUT:
S: <struct> structure to convert
OUTPUT:
args: <cell array> with parameter/value pairs
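EXAMPLE (sketch):
A minimal sketch, not part of the original help text.
S    = struct('a',1, 'b',2);
args = struct2varargin(S);   %expected: {'a',1,'b',2}
%the pairs can then be forwarded to functions that accept parameter/value
%input, e.g. somefunction(args{:})   (somefunction is hypothetical)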
See also: varargin2struct, struct2cell
ModelitUtilRoot
22-Feb-2007 15:56:00
522 bytes
strvscat - equivalent to strvcat, but treats '' as an empty line
CALL:
s = strvscat(a,b,c,...)
INPUT:
a,b,c,... <char array>
OUTPUT:
s: <char matrix>
EXAMPLE:
strvscat(strvcat('aa','bb'),'','d')
ans =
aa
bb
d
SEE ALSO: strvcat
ModelitUtilRoot
25-Jul-2006 16:51:58
443 bytes
ticp -
CALL:
[hwin,cp] = ticp(hwin)
[hwin,cp,props] = ticp(hwin)
[hwin,cp] = ticp
[hwin,cp,props] = ticp
INPUT:
hwin: window handle
OUTPUT:
hwin : window handle, defaults to gcf
cp : new pointer
props : other suspended properties
+----WindowButtonMotionFcn
IMPORTANT NOTE
The behavior of this function depends on the number of output
arguments: when nargout>2 also the WindowButtonMotionFcn will be
suspended
EXAMPLE (simple)
ticp;
<various actions>
tocp;
EXAMPLE (thorough)
[hwin,cp]=ticp;
try
<various actions>
tocp(hwin,cp);
catch
tocp(hwin,cp);
rethrow(lasterror);
end
See also: ticpeval
ModelitUtilRoot
21-Oct-2009 06:54:10
1257 bytes
callback that changes the pointer while executing
CALL
The function is designed to be used as part of an HG callback. See
example.
INPUT
obj,event: arg1 and arg2 to be passed to function fp
fp: function handle or function name
varargin: arg3, arg4, etc., to be passed to fp
OUTPUT
none
EXAMPLE 1
OLD CODE:
set(h,'callb',{@myfunc,arg1,arg2})
NEW CODE:
set(h,'callb',{@ticpeval,@myfunc,arg1,arg2})
EXAMPLE 2
OLD CODE:
ticp
result=funcname(input)
tocp
NEW CODE:
result=ticpexec(@funcname,input1,input2)
See also
ticp,tocp, ticpexec
ModelitUtilRoot
02-Apr-2009 01:27:15
1173 bytes
toStr - convert object to string representation
CALL:
value = toStr(value)
INPUT
value: any matlab variable
OUTPUT
string: <string>
corresponding string representation
ModelitUtilRoot
16-Aug-2008 15:23:51
2147 bytes
CALL
tocp
tocp(hwin)
tocp(hwin,cp)
tocp(hwin,cp,props)
INPUT
hwin: window handle
cp: new pointer
EXAMPLE
[hwin,cp]=ticp;
<various actions>
tocp(hwin,cp);
OR
ticp;
<various actions>
tocp;
ModelitUtilRoot
20-Feb-2008 10:17:13
536 bytes
transact_gui - display transactions in GUI
CALL:
transact_gui(data,event,fp_getdata,C)
INPUT:
data: if nargin==1: this is the database
fp_getdata: function pointer to function that returns database structure.
This can be a 3 line function like:
function db=getdata
global MAINWIN %handle of application's main window
db=get(MAINWIN,'userdata')
C: structure with GUI constants (colors, fontsize, etc.). If not specified, default settings are used.
OUTPUT:
- Updated comments:
This function offers the option of modifying the comments field of any transaction.
The function mbdstore(db) is used to register these changes.
See mbdundoobj.m: storehandle and storefield should be set when mbdundoobj is called to make this work.
Example:
MAINWIN=create_fig(...)
data=struct(....)
db=mbdundoobj(data,'storehandle',MAINWIN,'storefield','userdata')
- ASCII or HTML report
EXAMPLE
%install:
transact_gui([],[],@fp_getdata,C)
%update from database:
%====================================
%update transaction log
if ~isempty(findobj('tag','win_trnsct'))
transact_gui(db);
end
%====================================
See also: logbookentry
ModelitUtilRoot
26-Jan-2010 17:49:54
17180 bytes
transact_update - display transactions in GUI
SUMMARY
This function checks if the logbook GUI is posted. If it is, it will
update this GUI. transact_update is typically called from
display functions in various applications. The objective is to keep
the logbook GUI up to date if you leave it open for a longer time. If the
logbook screen is modal, you may avoid calling this function, because
the database cannot be modified as long as the logbook is posted.
CALL:
transact_update(data,ind)
INPUT:
data: database
ind: subsref structure or string with value 'all' or 'item'
in case ind='all': all fields will be updated
in case ind='item': only the transaction list will be updated
OUTPUT:
no direct output
EXAMPLE
transact_update(udnew,'all');
ModelitUtilRoot
26-Jan-2010 17:27:28
7090 bytes
truecolor - create a Matlab truecolor map from a linear color map.
CALL:
cdata = truecolor(x,map)
INPUT:
x: linear color map (indices into the colormap)
map: colormap that x refers to
OUTPUT:
cdata: truecolor 3-dimensional array for use as the cdata property of a button.
APPROACH:
x is a vector of indices into map.
Create an array consisting of color vectors from map.
Use RESHAPE to convert this array into a 3-dimensional array.
NOTE:
uint8 is much more compact than double.
Inefficient code:
cdata=truecolor(x,map)
Efficient code:
cdata=truecolor(x,uint8(255*map))
ModelitUtilRoot
09-Jan-2007 15:55:50
955 bytes
UIGETFOLDER Standard Windows browse for folder dialog box.
CALL
folder = uigetfolder(title, initial_path)
INPUT
title
title string (OPTIONAL)
initial_path
initial path (OPTIONAL, defaults to PWD)
OUTPUT
folder
selected folder (empty string if dialog cancelled)
EXAMPLE
folder = uigetfolder - default title and initial path
folder = uigetfolder('Select results folder') - default initial path
folder = uigetfolder([], 'C:\Program Files') - default title
See also: UIGETFILE, UIPUTFILE, UIGETDIR, SELECTDIR
NOTE:
uigetfolder predates uigetdir. After appropriate testing, calls to
uigetfolder should be replaced with calls to uigetdir
ModelitUtilRoot
20-Mar-2009 15:57:02
1855 bytes
ModelitUtilRoot
05-Nov-2001 11:31:50
7168 bytes
urlproxyread - retrieve the contents of a url as a string, and also set
the proxy settings while doing so.
CALL:
[string, status] =
urlproxyread(urlChar, method, params, proxyadres, proxypoort)
INPUT:
urlChar: <string> with the url
method: <string> possible values:
- 'post'
- 'get'
params: <cellstr> with parameter/value combinations that serve as
arguments for the request to be executed
proxyadres: <string> address of the proxy server
proxypoort: <integer> port number
OUTPUT:
string: <string>
status: <boolean> true -> data retrieved
false -> an error occurred while executing the request
See also: urlread, urlwrite
ModelitUtilRoot
13-Jul-2007 10:07:48
4630 bytes
utilspath - return string containing path to utils directory
CALL
pth=utilspath
INPUT
none
OUTPUT
pth: string containing path to utils directory
See also: modelitpath
ModelitUtilRoot
28-Jul-2008 22:03:24
372 bytes
validval - make sure uicontrol with style listbox has valid values for
attributes "val" and "ListboxTop"
CALL
vl=validval(hlist,center)
INPUT
hlist: handle of uicontrol object
center: center selected values (default: 0)
OUTPUT
attributes "val" and "ListboxTop" are modified if needed
ModelitUtilRoot
15-Aug-2008 16:35:44
2493 bytes
varargin2struct - convert value-pair combinations to structure
CALL:
defaultOptions = varargin2struct(defaultOptions,ValidProps,...
PROPERTY1,VALUE1,PROPERTY2,VALUE2,...)
defaultOptions = varargin2struct(defaultOptions,ValidProps,...
PROPERTY1,VALUE1,OPTSTRUCT,...)
INPUT:
defaultOptions: Struct with default values
ValidProps: Allowable fields
PROPERTY,VALUE: Property-Value pairs
and/or
OPTSTRUCT: Option structure that stores property value pairs
+----PROPERTY1=VALUE1
+----PROPERTY2=VALUE2
OUTPUT:
Options: structure in which all fields for which input was
supplied have been overwritten
EXAMPLE:
function do_some(varargin)
defaultOptions=struct('a',1,'b',2);
Options=varargin2struct(defaultOptions,fieldnames(defaultOptions),varargin{:});
See also: getproperty
ModelitUtilRoot
26-Jun-2008 16:03:39
3971 bytes
varsize - compute the approximate size occupied by Matlab variable
CALL
sz=varsize(S)
INPUT
S: Matlab variable
OUTPUT
sz: size in number of bytes
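EXAMPLE (sketch):
A rough illustration, not part of the original help text; the returned number
is approximate, as stated above (a double occupies 8 bytes).
A  = rand(100,100);   %10000 doubles, 8 bytes each
sz = varsize(A);      %expected to be roughly 80000 bytes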
ModelitUtilRoot
15-Aug-2008 16:52:05
2034 bytes
width - get matrix width, shortcut for size(str,2)
CALL
w=width(str)
INPUT
str: matrix
OUTPUT
w: matrix width
SEE ALSO: size, length, height
ModelitUtilRoot
16-Sep-2006 10:10:13
237 bytes
windowpostion - convert normalized window position to matlab window pixel coordinates
so that the matlab window, including its border, fits the area
NOTE: this function will become obsolete. Use windowposV7 instead.
CALL
inner_position=windowpixel(outer_position,menupresence)
INPUT
outer_position: required normalized position of window (with borders)
menupresence [menu toolbar]
menupresence(1):
1: if one or more drop down menus are present
0: otherwise
menupresence(2):
1: if toolbar is present
0: otherwise
OUTPUT
inner_position: Matlab window position vector borders excluded
(if 'menus' vector applies)
wind: structure with the following fields:
wind.BorderWidth,
wind.TitleHeight,
wind.MenuHeight,
wind.ToolbarHeight
FILES READ FROM
the heights of border, toolbar and menus are retrieved with GETPRF1
see screensetting
NOTE
since Matlab V7 the behaviour of Matlab has changed slightly.
The following behaviour is observed:
When a toolbar or menubar is added, the height of the figure
increases. Matlab attempts to extend the figure at the top, but only
does so if there is room available on the desktop;
otherwise the figure will be expanded downwards.
SEE ALSO:
windowposV7
APPROACH
retrieve the structure 'wind' with GETPRF1.
The structure 'wind' has the following fields:
wind.BorderWidth,
wind.TitleHeight,
wind.MenuHeight,
wind.ToolbarHeight
Determine inner_position from outer_pixel_position with the following adjustments:
lower bound : increase by BorderWidth
left position: increase by BorderWidth
width : subtract 2 times BorderWidth
height : subtract BorderWidth, TitleHeight and ToolbarHeight
ModelitUtilRoot
15-Aug-2008 18:33:43
5367 bytes
windowposV7 - position figure on desktop. This function supersedes
windowpos
CALL:
windowposV7(HWIN,NormPos,LMARGE)
INPUT:
HWIN: <handle> figure handle
NormPos <vector of double> required position in normalized coordinates
LMARGE <integer> required margin below (in pixels)
OUTPUT:
this function changes the "outerposition" property of HWIN
REMARK:
First install all menus and toolbars, as these change the
figure's outerposition, then call this function
EXAMPLE:
HWIN = figure;
Htool = uitoolbar;
uimenu(HWIN,'label','file');
windowposV7(HWIN,[0 0 1 1],20);
See also: windowpos
ModelitUtilRoot
11-Jan-2007 13:01:18
1440 bytes
ModelitUtilRoot
28-Nov-2008 12:51:47
270 bytes
writestr - write character array to file
CALL
writestr(fname,str)
INPUT
fname : file to write to
str : char array to write
OUTPUT
none
EXAMPLE
hwin=ticp
hwait=waitbar(0,'Generate report');
strs={};
for k=1:N
waitbar((k-.5)/N,hwait);
strs{end+1}=makeReportLine(k,....);
end
writestr(fname,strs(:)); <<<Note! vertical concatenation is needed to
create a multiline report
close(hwait);
tocp
SEE ALSO: readstr
ModelitUtilRoot
20-Oct-2005 15:28:47
1351 bytes
zoomtool - install Modelit zoomtool
CALL
zoomtool
zoomtool(1)
zoomtool(1,option)
zoomtool(N,<property1>,<value1>,<property2>,<value2>,...)
zoomtool('set',hax,<property1>,<value1>,<property2>,<value2>)
INPUT
N: mode of operation
1 install zoom utility (default)
2 zoom in using rbbox
3 zoom back using history of zoom windows
3.1 Maximise X&Y
3.2 Maximise X
3.3 Maximise Y
4 clear zoom history
5 add current zoomwindow to menu
6 toggle sliders on/off
7 delete stored zoomwindows
8 temporarily disable zoom buttond
9 reinstall zoom buttond
10 zoom out (in this case axis&factor are supplied with arg2&arg3)
11 zoom to predefined values
Example: zoomtool(11,'axes',hax,'xlim',xlim,'ylim',ylim)
12 execute callback of x slider
13 execute callback of y slider
14 set up X movie
16 force execution of synchronisation callback
17 pretend current view is result of zoom action (enables undo,
xsync, ysync, scale, move, etc)
18 return zoomhandle
19 change view so that specific hg object fit
20 center view on selected objects, do not resize
option: structure of specific zoom settings
opt.axes : handle of zoom axes
opt.parent : use 'axes' or 'window' (default: axes)
axes: install buttondown on axes
window: install windowbuttondown on figure
opt.xsync : handles of synchronized x-axes
opt.ysync : handles of synchronized y_axes
opt.patchsync : handle of patch object (usually in overview map)
opt.scale : string containing name of function to call after scaling coordinates
(will also be added to windowresize function)
WARNING: opt.scale installs a resize function on
top of current resize function. when axes is
deleted this resize function is not disabled
opt.move : string containing name of function to call after shifting coordinates
============
When ZOOMING on graph: first call opt.move, then call opt.scale
When RESIZE on window: only call opt.scale
When MOVE on graph : only call opt.move
============
opt.shiftclick: function that is called after shift+click (windows)
example: opt.shiftclick='rbrect_init(1,0,''line'');'
opt.dblclick : function called when doubleclicked in axes
opt.leftclick : specify function (hint: to prevent zooming at left mouseclick specify ' ')
opt.xmovie : set to 'on' if Xmovie capability is needed (default: 'off')
opt.label : Label of the main menu (default: Zoom)
opt.visible : Label for zoom, 'on' or 'off'
opt.fa_zoom : if 1: keep fixed aspect ratio
opt.keypress : if 1: enable zooming by key press (this will
overwrite the keypress function for the current window)
opt.wheel: if 0: disable mousewheel while zooming
if 1: enable mousewheel zooming (standard mode)
if <1: enable mousewheel zooming (slow)
if >1: enable mousewheel zooming (fast)
opt.xrange : zoom range (x-axis)
opt.yrange : zoom range (y-axis)
ModelitUtilRoot
22-Apr-2010 10:11:46
52804 bytes
filechooser - add a filechooser to a frame
CALL:
filechooser(C,D,hframe)
INPUT:
C: <struct>
D: <struct>
hframe: <handle> of the frame to which the filechooser has to be added
fp_getfiletype: (optional) function pointer to function to determine
filetype
OUTPUT:
none, a filechooser is created in the specified frame
ModelitUtilRoot\@filechooser
25-Feb-2010 08:50:44
9048 bytes
get_opt - returns the filechooser's undoredo object CALL: opt = get_opt(obj) INPUT: obj: <filechooser-object> OUTPUT: opt: <undoredo-object>
ModelitUtilRoot\@filechooser
18-Sep-2006 18:28:14
249 bytes
refresh - update the list with files in the filechooser CALL: refresh(obj) INPUT: obj: <filechooser-object> OUTPUT: none, the list with files in the filechooser is updated
ModelitUtilRoot\@filechooser
12-Jun-2008 08:07:28
356 bytes
set_directory - change directory in the filechooser CALL: set_directory(obj,value) INPUT: obj: <filechooser-object> value: <string> new directory OUTPUT: none, the directory and the list with files in the filechooser are updated
ModelitUtilRoot\@filechooser
26-Nov-2009 19:08:28
797 bytes
set_filter - change filefilter CALL: set_filter(obj,value) INPUT: obj: <filechooser-object> value: <string> new filter OUTPUT: none, the filter and the list with files in the filechooser are updated
ModelitUtilRoot\@filechooser
02-Mar-2008 20:35:40
440 bytes
getDirStruct - return struct array with file information in specified
directory
CALL:
fls = getDirStruct(directory,types,D)
INPUT:
directory: <string> with directory to be searched for files
types: <string> filefilter
D: <struct> with at least the field filetypes which contains a
struct array with fields
- label
- filter
- image
and the fields
- dirup: <javax.swing.ImageIcon>
- folder: <javax.swing.ImageIcon>
fp_getfiletype: function pointer to function to determine filetype, if
empty local function is used
OUTPUT:
fls: <array of struct>
See also: filechooser, dir
ModelitUtilRoot\@filechooser\private
02-Mar-2008 20:13:34
3358 bytes
addInstallManual - add installation manual to help menu
INPUT
hlpobj: help center object
menuStr: name for menu
OUTPUT
hlpobj: help center object, install manual has been added
EXAMPLE
hlpobj=addInstallManual(hlpobj,'Installatiehandleiding');
ModelitUtilRoot\@helpmenuobj
08-May-2007 11:43:01
522 bytes
addpdf - add pdf document to help form
CALL
obj=addpdf(obj,name,fname,<PROPERTY>,<VALUE>,<PROPERTY>,<VALUE>,...)
INPUT
obj: helpmenuobj object
name: name of document
fname: corresponding filename
<PROPERTY>,<VALUE> pairs
Valid properties:
path: Cell array
path{1}: url (example www.modelit.nl)
path{2}: username ([] if anonymous)
path{3}: password ([] if anonymous)
path{4:end}: path to files
ModelitUtilRoot\@helpmenuobj
13-Jul-2005 00:21:07
1174 bytes
addfile - add file to help form
CALL:
obj = addfile(obj,name,fname,varargin)
INPUT:
obj: object of type helpmenuobj
name: string with description of document; appears in menu
fname: string with url pointing to file
OUTPUT:
obj: object of type helpmenuobj
ModelitUtilRoot\@helpmenuobj
28-Oct-2009 17:52:36
548 bytes
addhtml - add html document to help form
CALL:
obj = addhtml(obj,name,fname,varargin)
INPUT:
obj: <object> of type helpmenuobj
name: <string> with the name of the document
fname: <string> with the filename
varargin: <cell array> with possible fields:
- varargin{1} url (example www.modelit.nl)
- varargin{2} username ([] if anonymous)
- varargin{3} password ([] if anonymous)
- varargin{4:end}: path to file
OUTPUT:
obj: <object> of type helpmenuobj
ModelitUtilRoot\@helpmenuobj
10-Jan-2006 10:18:14
982 bytes
addinstall - add object of type "install" to help menu object
CALL
obj=addinstall(obj,name,fname,<PROPERTY>,<VALUE>,<PROPERTY>,<VALUE>,...)
INPUT
obj: helpmenuobj object
name : (part of) name of installer. For example: name=setup.exe will
also select install205.exe. The highest version number will
be selected.
fname: corresponding filename
<PROPERTY>,<VALUE> pairs
Valid properties:
path: Cell array
path{1}: url (example www.modelit.nl)
path{2}: username ([] if anonymous)
path{3}: password ([] if anonymous)
path{4:end}: path to files
EXAMPLE
hlpobj=addinstall(hlpobj,'Meest recente software versie','setupMatlab.exe',...
'path',{'www.modelit.nl','uname','pw','setup'});
See also:
helpmenu
ModelitUtilRoot\@helpmenuobj
13-Dec-2007 19:42:47
3163 bytes
addlabel - add label to help form
CALL
obj=addlabel(obj,labelstr)
INPUT
obj: helpmenu object
labelstr: label string to display
OUTPUT
obj: helpmenu object after update.
ModelitUtilRoot\@helpmenuobj
17-Aug-2008 12:50:24
407 bytes
addpdf - add pdf document to help form
CALL:
obj=addpdf(obj,name,fname,<PROPERTY>,<VALUE>,<PROPERTY>,<VALUE>,...)
INPUT:
obj: helpmenuobj object
name: name of document
fname: corresponding filename
<PROPERTY>,<VALUE> pairs
Valid properties:
path: Cell array
path{1}: url (example www.modelit.nl)
path{2}: username ([] if anonymous)
path{3}: password ([] if anonymous)
path{4:end}: path to files
SEE ALSO
addInstallManual
ModelitUtilRoot\@helpmenuobj
09-Jun-2007 12:01:25
904 bytes
addwebsite - add url to help form
CALL
obj=addwebsite(obj,name,url)
INPUT
obj: helpmenuobj object
name: name of document
url: website to be opened
ModelitUtilRoot\@helpmenuobj
19-Jun-2005 11:54:51
403 bytes
addzip - add zipped document to help form
CALL
obj=addzip(obj,name,fname,<PROPERTY>,<VALUE>,<PROPERTY>,<VALUE>,...)
INPUT
obj: helpmenuobj object
name: name of document
fname: corresponding filename
<PROPERTY>,<VALUE> pairs
Valid properties:
path: Cell array
path{1}: url (example www.modelit.nl)
path{2}: username ([] if anonymous)
path{3}: password ([] if anonymous)
path{4:end}: path to files
SEE ALSO
helpmenu
ModelitUtilRoot\@helpmenuobj
07-Feb-2008 00:43:34
4912 bytes
addzip - add zipped document to help form
CALL
obj=addzip(obj,name,fname,<PROPERTY>,<VALUE>,<PROPERTY>,<VALUE>,...)
INPUT
obj: helpmenuobj object
name: name of document
fname: corresponding filename
<PROPERTY>,<VALUE> pairs
Valid properties:
path: Cell array
path{1}: url (example www.modelit.nl)
path{2}: username ([] if anonymous)
path{3}: password ([] if anonymous)
path{4:end}: path to files
SEE ALSO
helpmenu
ModelitUtilRoot\@helpmenuobj
08-Feb-2008 13:05:42
4924 bytes
helpmenu CALL: D = helpmenu(obj,event,hlpobj,C,D) INPUT: obj: <handle> of the calling uicontrol event: <empty> standard matlab callback argument hlpobj: < C: D: OUTPUT: D:
ModelitUtilRoot\@helpmenuobj
17-Dec-2009 07:46:02
9275 bytes
helpmenuobj -
CALL:
obj = helpmenuobj(varargin)
INPUT:
<PROPERTY>,<VALUE> pairs
Valid properties:
path: Cell array
path{1}: url (example www.modelit.nl)
path{2}: username ([] if anonymous)
path{3}: password ([] if anonymous)
path{4:end}: path to files
NOTE
helpmenu method opens help window
EXAMPLE / TEST CODE
HWIN=figure
Htool = uitoolbar(HWIN);
hlpobj=helpmenuobj('path',{'www.modelit.nl','rikz','rikz','MARIA','manuals'});
hlpobj=addlabel(hlpobj,'Algemeen');
hlpobj=addpdf(hlpobj,'Installatie','InstallatieHandleiding.pdf');
hlpobj=addpdf(hlpobj,'Algemene handleiding','Handleiding_final.pdf');
hlpobj=addlabel(hlpobj,'Specifieke onderwerpen');
hlpobj=addpdf(hlpobj,'Inlezen WESP files','Handleiding_Inlezen_WESP.pdf');
% hlpobj=addpdf(hlpobj,'Aanmaken SWAN bodemkaart','Handleiding_Inlezen_WESP.pdf');
hlpobj=addpdf(hlpobj,'CEN editor','GeodataHelp.pdf');
hlpobj=addpdf(hlpobj,'Controle meerjarige reeksen','jaarcontroleHelp.pdf');
hlpobj=addpdf(hlpobj,'Aanmaken contouren voor kustmetingen','Raaicontour.PDF');
hlpobj=addlabel(hlpobj,'Websites');
hlpobj=addwebsite(hlpobj,'Modelit website','www.modelit.nl');
uipushtool(Htool,'cdata',getcdata('help'),...
'separator','on',...
'tooltip','Open help file (download wanneer nodig)',...
'clicked',{@helpmenu,hlpobj});
METHODS
addlabel
addpdf
newcolumn
See also: helpmenu
ModelitUtilRoot\@helpmenuobj
08-Feb-2007 15:35:52
1873 bytes
newcolumn - start new column in help menu
CALL
obj=newcolumn(obj)
INPUT
obj: object of class helpmenuobj
OUTPUT
obj: object of class helpmenuobj after updates
ModelitUtilRoot\@helpmenuobj
17-Aug-2008 12:47:41
359 bytes
emptyopt - private function for class helpmenuobj
SUMMARY
Create appendable structure for storage in struct array of helpmenu
object.
CALL
S=emptyopt
INPUT
none
OUTPUT
S:
initial structure that can be appended.
ModelitUtilRoot\@helpmenuobj\private
17-Aug-2008 12:53:20
459 bytes
append - append tables to a table-object CALL: obj = append(obj,varargin) INPUT: obj: <table-object> varargin: <table-object> tables to be appended OUTPUT: obj: <table-object> See also: table, table/deleteRow, table/insertRow
ModelitUtilRoot\@table
19-Sep-2006 22:17:02
982 bytes
composeList - put the table data in a structure which can be used by the
jacontrol/sorttable object
CALL:
Contents = composeList(obj,fields,format)
INPUT:
obj: <table-object>
fields: <cellstring> (optional) with fieldnames, default is all
fieldnames
format: <cellstring> (optional) with formats, see tablefile for possible
values, default is numeric -> number
other -> string
OUTPUT:
Contents: <struct> with fields:
- header
- contents
N.B. this is the format which is needed by the
jacontrol/sorttable to display data
See also: table, tablefile/edit, jacontrol
ModelitUtilRoot\@table
03-Jan-2007 10:17:20
2841 bytes
deleteColumn - delete column(s) from table CALL: obj = deleteColumn(obj,varargin) INPUT: obj: <table-object> varargin: <string> one or more table columnnames OUTPUT: obj: <table-object> See also: table, table/keepColumn, table/renameColumn
ModelitUtilRoot\@table
20-Sep-2006 10:15:54
378 bytes
deleteRow - delete one or more rows in a table CALL: obj = deleteRow(obj,rows) INPUT: obj: <table-object> rows: <integer> index of table rows to be deleted OUTPUT: obj: <table-object> See also: table, table/insertRow, table/append
ModelitUtilRoot\@table
16-Sep-2006 14:14:16
424 bytes
disp - display information about a table-object on the console CALL: disp(obj) INPUT: obj: <table-object> OUTPUT: none, information about the table-object is displayed on the console See also: table, table/display, disp
ModelitUtilRoot\@table
24-Sep-2006 11:14:46
368 bytes
display - display information about a table-object on the console, called
when semicolon is not used to terminate a statement
CALL:
display(obj)
INPUT:
obj: <table-object>
OUTPUT:
none, information about the table-object is displayed on the console
See also: table, table/disp, display
ModelitUtilRoot\@table
16-Sep-2006 12:25:06
375 bytes
field2index - return the columnnumber of the fields
CALL:
index = field2index(obj,field)
INPUT:
obj: <table-object>
varargin: <string> with columnnames
OUTPUT:
index: <array of integer> with number of column of columnname,
0 if not present in table
See also: table, table/fieldnames, table/renameColumn, table/deleteColumn
ModelitUtilRoot\@table
16-Sep-2006 15:28:24
507 bytes
fieldnames - determine the columnames of the table CALL: fields = fieldnames(obj) INPUT: obj: <table object> OUTPUT: fields: <cellstring> with the fields (columnnames) of the table APPROACH: this function is also important for autocomplete in the command window SEE ALSO: table, fieldnames
ModelitUtilRoot\@table
16-Sep-2006 10:30:06
465 bytes
height - return height of table
CALL
H=height(T)
INPUT
T: table object
OUTPUT
H: number of rows in table
ModelitUtilRoot\@table
17-Aug-2008 10:13:57
401 bytes
insertRow - insert table into table at a specified row CALL: obj = insertRow(obj,row,T) INPUT: obj: <table-object> row: <integer> index of table row where T has to be inserted T: <table-object> table to be inserted OUTPUT: obj: <table-object> See also: table, table/deleteRow, table/append
ModelitUtilRoot\@table
16-Sep-2006 14:51:28
493 bytes
isField - returns true if field is a field of the table-object
returns false otherwise
CALL:
b = isField(obj,field)
INPUT:
obj: <table-object>
field: <string>
OUTPUT:
b: <boolean> true if field is a field of the table-object
false otherwise
ModelitUtilRoot\@table
17-Sep-2006 21:13:38
410 bytes
is_in - determines which rows in obj are equal to rows in obj1
CALL:
f = is_in(obj,obj1,varargin)
INPUT:
obj: <table-object>
obj1: <table-object>
varargin: <string> (optional) restrict comparison to specified columns
default: all fields
OUTPUT:
f: <index> f(i) = j indicates that the ith element in obj is equal to
the jth element in obj1
See also: table, table/selectIndex, table/selectKey, is_in
ModelitUtilRoot\@table
19-Sep-2006 16:46:52
1056 bytes
isempty - returns true if table is empty (i.e. number of rows is zero),
false otherwise
CALL:
b = isempty(obj)
INPUT:
obj: <table-object>
OUTPUT:
b: <boolean>
See also: table, table/size
ModelitUtilRoot\@table
16-Sep-2006 14:20:28
297 bytes
keepColumn - keep specified columns of a table CALL: obj = keepColumn(obj,varargin) INPUT: obj: <table-object> varargin: <string> with columnnames to be kept OUTPUT: obj: <table-object> with only the select columns See also: table, table/deleteColumn, table/renameColumn
ModelitUtilRoot\@table
20-Sep-2006 10:21:10
564 bytes
renameColumn - rename column(s) of table CALL: obj = renameColumn(obj,varargin) INPUT: obj: <table-object> varargin: <string> with (name,newname)-pairs OUTPUT: obj: <table-object> See also: table, table/keepColumn, table/deleteColumn
ModelitUtilRoot\@table
20-Sep-2006 10:16:52
870 bytes
table/rmfield - apply rmfield method to table object
CALL
T=rmfield(T,fieldslist)
T=rmfield(T,field1,field2,...)
INPUT
T:
table object
fieldlist:
cell array containing fieldnames
field1,field2,...:
fields listed separately
OUTPUT
T:
table object after update
ModelitUtilRoot\@table
17-Aug-2008 10:07:52
455 bytes
select - invoke tableselect method on table object
CALL:
T = select(S,indx,flds)
T = select(S,indx)
T = select(S,flds)
INPUT:
S: table object
indx: index array
flds: cell array
OUTPUT:
T: table object after update
ModelitUtilRoot\@table
17-Aug-2008 10:03:52
366 bytes
selectIndex - select one or more rows in a table
CALL:
obj = selectIndex(obj,index)
INPUT:
obj: <table-object>
index: <integer> index of table rows to be selected
varargin: <string> fieldnames of the table-object -> restrict output to
these columns
OUTPUT:
varargout: <table-object> if nargout == 1 && varargin == 2,4,5,....
varargout: <array> if nargout == varargin
See also: table, table/selectKey, table/is_in
ModelitUtilRoot\@table
17-Sep-2006 21:43:00
1626 bytes
selectKey - select one or more rows in a table with keyvalues
CALL:
varargout = selectKey(obj,key,value,varargin)
INPUT:
obj: <table-object>
key: <cell array> table columnnames
value: <cell array> value to look for in specified columns
varargin: <string> fieldnames of the table-object -> restrict output to
these columns
OUTPUT:
varargout: <table-object> if nargout == 1 && varargin == 2,4,5,....
varargout: <array> if nargout == varargin
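EXAMPLE (sketch):
A sketch of a possible call, not part of the original help text; it reuses
the small example structure from the table constructor entry, and the
expected behaviour is inferred from the description above.
S(1).number = 1; S(1).string = 'one';
S(2).number = 2; S(2).string = 'two';
T  = table(S);
T2 = selectKey(T, {'string'}, {'two'});   %expected: the row(s) where string equals 'two'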
See also: table, table/selectIndex, table/is_in
ModelitUtilRoot\@table
19-Sep-2006 16:19:56
1002 bytes
size - determine the size of the table
CALL:
[m,n] = size(obj,dim)
INPUT:
obj: <table-object>
dim: <integer> (optional) possible values:
1 (default) --> vertical (number of rows)
2 --> horizontal (number of columns)
OUTPUT:
m: <integer> number of rows in the table
n: <integer> number of columns in the table
See also: table, table/length
ModelitUtilRoot\@table
19-Sep-2006 22:28:06
942 bytes
sort - sort table according to specified field and direction
CALL:
obj = sort(obj, keys, mode)
INPUT:
obj: <table-object>
keys: <cellstring> columnnames of the table to be sorted
mode: <array of integer> (optional) sorting direction, allowed values:
1 --> 'ascend' (default)
-1 --> 'descend'
OUTPUT:
obj: <table-object> sorted according to specified columns/directions
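EXAMPLE (sketch):
An illustrative sketch, not part of the original help text; T is assumed to
be a table-object with columns 'string' and 'number'.
T = sort(T, {'string','number'}, [1 -1]);   %ascending on 'string', descending on 'number'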
See also: table
ModelitUtilRoot\@table
25-Apr-2007 16:03:42
1037 bytes
struct - return data component of table object
CALL
S=struct(T)
INPUT
T:
table object
OUTPUT
S:
data content of the table object; this is a table structure
ModelitUtilRoot\@table
17-Aug-2008 10:09:29
262 bytes
subsasgn - assign new values to a table-object
CALL:
obj = subsasgn(obj,ind,data)
INPUT:
obj: <table-object>
ind: <struct array> with fields
- type: one of '.' or '()'
- subs: subscript values (field name or cell array
of index vectors)
data: the values to be put in the fields of the table-object
defined by ind; allowed types:
- <number>
- <boolean>
- <string> or <cellstr>
OUTPUT:
obj: <table-object>
See also: table, table/subsref, subsasgn
ModelitUtilRoot\@table
17-Sep-2006 21:39:26
1454 bytes
subsref - subscripted reference for a table-object
CALL:
S = subsref(obj,ind)
INPUT:
obj: <table-object>
ind: <struct array> with fields
- type: one of '.' or '()'
- subs: subscript values (field name or cell array
of index vectors)
OUTPUT:
S: <array> with the contents of the referenced field
See also: table, table/subsasgn, subsref
ModelitUtilRoot\@table
17-Sep-2006 21:06:36
535 bytes
table - constructor for table-object
CALL:
obj = table(T)
INPUT:
T: <array of struct>
<structarray>
OUTPUT:
obj: <table-object>
Example:
S(1).number = 1;S(1).string = 'one'
S(2).number = 2;S(2).string = 'two'
T = table(S);
ModelitUtilRoot\@table
22-Oct-2007 18:35:44
773 bytes
unique - restrict the table to unique rows
CALL:
[B,I,J] = unique(obj,varargin)
INPUT:
obj: <table-object>
varargin: <string> (optional) with table columnnames
default: all fields
OUTPUT:
B: <table-object> with only the unique rows
if nargin > 1 restricted to the specified columns
I: <table-object> index such that B = obj(I);
J: <table-object> index such that obj = B(J);
See also: table, table/selectIndex, table/selectKey, unique
ModelitUtilRoot\@table
17-Sep-2006 19:43:10
1340 bytes
emptyRow - make an empty table row for the given table-object
CALL:
obj = emptyRow(obj)
INPUT:
obj: <table object> table-object for which an empty row has to be made
N: <integer> number of emptyrows to generate
OUTPUT:
obj: <table object> a table with zero rows with the same format as the
input table
See also: table, table/append
ModelitUtilRoot\@table\private
17-Sep-2006 20:27:16
742 bytes
isSimilar - return true if obj and obj1 have the same fields and formats
return false otherwise
CALL:
b = isSimilar(obj,obj1)
INPUT:
obj: <table-object>
obj1: <table-object>
OUTPUT:
b: <boolean> true if obj and obj1 have the same fields and format
false otherwise
See also: table, table/append
ModelitUtilRoot\@table\private
17-Sep-2006 20:15:40
940 bytes
istable - determine if S can be converted to a table-object
CALL:
[ok,emsg] = istable(S)
INPUT:
S: <struct> (candidate) table structure
OUTPUT:
ok: <boolean> true if S is table structure,
false otherwise
N: <integer> height of table
See also: table
ModelitUtilRoot\@table\private
13-Nov-2006 16:42:12
1303 bytes
structarray2table - convert array of structures to structure of arrays
CALL:
T = structarray2table(S)
INPUT:
S: <structarray>
OUTPUT:
T: <struct> structure of arrays
APPROACH:
concatenate numeric fields
convert fields with strings into cellstrings
See also: table
ModelitUtilRoot\@table\private
16-Sep-2006 12:21:36
947 bytes
fr_divider - insert draggable divider
CALL:
[hframe, jseparator] = fr_divider(hparent, varargin)
INPUT:
hparent: handle of parent frame
property,value: property value pair
VALID PROPERTIES:
rank: rank of frame, any number
mode: resize mode, choose one of the following
proportional: increase the size of all lower frames
proportionally, at the cost of all
frames above
neighbour : increase size of only next frame at
the cost of only the frame directly
above
OUTPUT:
hframe: handle of frame that contains divider
jseparator: jacontrol of type jseparator
ModelitUtilRoot\MBDresizedir
19-Oct-2009 16:00:20
8014 bytes
lm_title - set or get the title of a frame
CALL:
h = lm_title(hframe): return handle to title object
h = lm_title(hframe, str, varargin): install title in frame
INPUT:
hframe: handle of frame
str: title to be displayed in frame
varargin: valid property-value pairs for a uicontrol with style “text”
OUTPUT:
h: handle to title object (uicontrol with “text”)
APPROACH:
(default settings applied)
-1- create a uicontrol with properties:
tag = 'frmTitle'
userdata = <handle of frame>
-2- call mbdlinkobj with properties:
pixpos = <depends on extent of title>
normpos = [0 1 0 0]
clipping = true
clipframe = <parent of frame>
keepypos = true
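EXAMPLE:
A minimal sketch, assuming hframe is an existing layout-manager frame:
h = lm_title(hframe,'Settings');
set(h,'FontWeight','bold');    % h is a regular uicontrol of style "text"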
ModelitUtilRoot\MBDresizedir
11-Aug-2008 22:53:54
2059 bytes
lm_isparent - Find out if a given frame is a child of any of a list of
candidate parent frames
CALL:
istrue = isparentframe(h, hframes)
INPUT:
h: Frame handle (scalar)
hframes: Handles of potential parent frames
OUTPUT:
istrue: Boolean, true if any of hframes is the parent of h
See also: lm_parentframe
ModelitUtilRoot\MBDresizedir
11-Aug-2008 22:22:02
1521 bytes
lm_listFrameHandles - retrieve frame handles and frame data for the
specified figure
CALL:
[FrameHandles, FrameData] = lm_listframeHandles(hfig)
INPUT:
hfig: figure handle (defaults to gcf)
OUTPUT:
FrameHandles: Nx1 list of frame handles
FrameData: Nx1 struct array with corresponding application data
ModelitUtilRoot\MBDresizedir
17-Sep-2010 15:34:32
3205 bytes
delete frame and all dependent items
CALL
mbd_deleteframe(hframes)
INPUT
hframes: list of frame handles
OUTPUT
none
See also: lm_deleteframecontent
ModelitUtilRoot\MBDresizedir
04-Aug-2008 15:02:47
1454 bytes
mbd_deleteframecontent - delete contents of frame, but leave frame in place
CALL
mbd_deleteframecontent(hframes,h_excepted)
INPUT
hframes: frame or frames to be deleted (all frames must be members
of the same figure)
h_excepted: handles of objects that should not be deleted
OUTPUT
none
See also: mbd_deleteframe
ModelitUtilRoot\MBDresizedir
17-Aug-2008 10:28:50
2068 bytes
mbd_initialize_axis -
CALL:
h = mbd_initialize_axis(HWIN,LAYER)
initialize pixel axes for this window
INPUT
HWIN: window for which pixel axes will be set (defaults to gcf)
LAYER: Layer number. If needed, multiple axes objects can be created
to enable plotting in different layers. Frames plotted in the current
axes obscure lines and text objects in other layers
OUTPUT
h: handle of pixel axes for layer LAYER
EXAMPLE
hax=mbd_initialize_axis;
h=text(1,1,'my text','parent',hax);
mbdlinkobj(h,hframe,'pixelpos',[ 10 10 20 20]);
ModelitUtilRoot\MBDresizedir
14-Oct-2006 00:21:48
1437 bytes
mbdarrange - arrange uicontrol objects in rows and columns
CALL
mbdarrange(hframe,property,value,...)
mbdarrange(hframe,propertystruct)
INPUT
input comes in parameter-name,value pairs (parameter name not case
sensitive)
LMARGE, value: margin left (Default =10)
LMARGE is a scalar
RMARGE, value: margin right (Default =10)
RMARGE is a scalar
HMARGE, value: margin between, horizontal (Default =5)
HMARGE may be specified as a vector or scalar
TMARGE, value: margin top (Default =15)
TMARGE is a scalar
BMARGE, value: margin below (Default =6)
BMARGE is a scalar
VMARGE, value: margin between, vertical (Default =1)
VMARGE may be specified as a vector or scalar
PIXELW, value: pixel width of frame (default: compute)
PIXELH, value: pixel height of frame (default: compute)
NORESIZE, value: if set, do not resize frame
HEQUAL, value: if set, distribute Horizontally (default: 0)
VEQUAL, value: if set, distribute Vertically (default: 0)
HNORM, (0,1) if 1: normalize horizontally (use full frame width)
VNORM, (0,1) if 1: normalize vertically (use full frame height)
HCENTER, (0,1,2) if 0: left align
if 1: center items in horizontal direction
if 2: right align items in horizontal direction
NOTE: if HNORM==1 the HCENTER option is ignored
VCENTER, (0,1,2) if 0: top align
if 1: center items in vertical direction
if 2: bottom align
NOTE: if VNORM==1 the VCENTER option is ignored
INDIRECT INPUT
object application data:
keeppixelsize: set to 1 to prevent changing pixelsize
ignoreh : set to 1 to prevent using height to compute row
pixel height
ignorew : set to 1 to prevent using width to compute column
pixel width
pixelpos : if set, pixelpos is not recomputed
normpos : if option HNORM is active, element 3 of normpos is
used (EXCEPTION: if object is spread over more
columns, its normalized width is not used)
object attributes
pos
type
extent
OUTPUT
pixpos: [pixpos(1) pixpos(2)] extent of the objects, including margins
raster: Coordinates of raster. Suppose raster is M x N:
raster.x.pixelpos (length N+1)
raster.x.normpos (length N+1)
raster.y.pixelpos (length M+1)
raster.y.normpos (length M+1)
APPROACH
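EXAMPLE
A minimal sketch, assuming hframe is a frame created with mbdcreateframe that already contains uicontrols; the mbdresize call mirrors the resize callback used elsewhere in this overview:
mbdarrange(hframe,'LMARGE',5,'RMARGE',5,'HNORM',1);  % use the full frame width
mbdresize(gcf);                                      % reposition all frames and linked objects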
ModelitUtilRoot\MBDresizedir
08-Jun-2010 18:46:17
23598 bytes
mbdcreateexitbutton - Add exit button to frame.
SUMMARY
Add exit button to frame. By default this button is placed in the upper-right
corner of a frame.
CALL
h = mbdcreateexitbutton(hparent,BACKG,callback)
INPUT:
hparent: handle of parent frame
BACKG: color for transparent part of button
callback: additional function to call when frame is closed
OUTPUT:
h: handle of button
EXAMPLE:
position the button in the lower-right corner
h=mbdcreateexitbutton(hparent)
h=mbdcreateexitbutton(h_hlp)
setappdata(h,'normpos',[ 1 1 0 0]);
setappdata(h,'pixelpos',[-14 -14 12 12]);
SEE ALSO:
fr_exitbutton
mbdframeonoff
ModelitUtilRoot\MBDresizedir
17-Aug-2008 10:27:33
1873 bytes
mbdcreateframe - create an mbdresize frame
CALL
h=mbdcreateframe(handle,'property', value, 'property', value)
h=mbdcreateframe('property', value, 'property', value)
INPUT
handle: handle of the parent frame.
If no parent handle is given,
the current figure becomes the parent
'property'/value :
Non-default properties of the frame.
These only need to be given for non-default values.
Possible properties:
PROPERTY MEANING
======================================================================
'active' visibility of the frame and all children
true==> visible
false==> not visible
'border' (default=true)
visibility of the border of the frame and the frame itself
true==> visible
false==> invisible
NOTE: the border of the frame is drawn on the inner border
'enable' enable properties of this frame and all children
'exitbutton' (default=false)
presence of an exit button
'exitfunction' function that is called when the frame is deactivated
'lineprops' (default: [])
Properties of the line that marks the frame
see Matlab - line for more information
EXAMPLE: ...,'lineprops',mbdlineprops('color','k','shadowed',0 ),...
...,'lineprops',mbdlineprops,...
'shadowed' property: (default true)
When the property lineprops is set, shadowed makes sure
that a shadow is drawn consistently.
'maxpixelsize' (default=[inf inf])
When pixelsize is set this defines the maximum value (per
dimension)
'minmarges' margin in pixels for this frame [LEFT BOTTOM RIGHT TOP]
RELATIVE TO THE PARENT FRAME!! (so not relative to child frames)
NOTE: the border of the frame is drawn on the inner border
'normposition' position of the top frame relative to the figure (normalized)
'normsize' (default=[1 1])
Dimensions of the frame in normalized coordinates
NOTE: by specifying the pixelsize as NaN it is computed
as the sum of the ACTIVE subframes
'parenthandle' handle of the parent frame (usually passed as the first argument)
needed when a top frame is created in a non-current figure
'patchprops' (default: [])
Properties of the patch that marks the frame
see Matlab - patch for more information
EXAMPLE: ...,'patchprops',mbdpatchprops('facec',C.WINCOLOR,'linew',1),...
'pixelposition' position of the top frame relative to the figure (in pixels)
'pixelsize' (default=[0 0])
Dimensions of the frame in pixel coordinates
'rank' (default=0)
Placement of the frame:
with horizontal splitting: the higher the rank, the further to the right
with vertical splitting: the higher the rank, the further down
'slider' handle of a slider object
the child frames and objects are positioned depending on the
slider setting.
'splithor' (default= opposite of the split direction of the parent)
true==> split horizontally
false==> split vertically
'title' title string to print
OUTPUT
h: the handle of the created frame
EXAMPLES
Example -1-
Create a figure that sizes to fit contents exactly:
hfig=mbdcreateframe(HWIN,'splithor',0,'pixelsize',[NaN NaN],'normsize',[0 0]);
Example -2-
Create a figure that sizes to fit contents but does not shrink the figure:
hfig=mbdcreateframe(HWIN,'splithor',0,'pixelsize',[NaN NaN],'normsize',[1 1]);
ModelitUtilRoot\MBDresizedir
08-Mar-2008 12:49:30
20660 bytes
Create a frame that can be minimized
CALL
[h_ItemFrame,h_frame]=mbddoubleframe(h_parent,titlestr,outer_frame_opt,inner_frame_opt)
INPUT
h_parent : parent frame
titlestr : title
outer_frame_opt : cell array with options for the outer frame
Default properties:
'normsize',[1 0],...
'pixelsize',[0 NaN],...
'border',0,...
'splithor',0
inner_frame_opt : cell array with options for the inner frame
Default properties:
'normsize',[1 1],...
'lineprops',mbdlineprops,...
'active',1
OUTPUT
h_ItemFrame: frame in which content can be drawn
h_frame: outer frame
SEE ALSO
equivalent to mbdcreateframe
EXAMPLE
mbddoubleframe(h_parent,'Edit object',{'rank',1,'tag','EDITOR'},{})
mbddoubleframe(h_parent,'Edit object',{'tag','EDITOR'})
mbddoubleframe(h_parent,'Edit object')
ModelitUtilRoot\MBDresizedir
15-Aug-2008 18:35:09
3041 bytes
mbdinnerpixelsize - Change pixelsize property of frame
SUMMARY
Change the pixelsize property of a frame so that the size of the inner frame
matches a given size. This utility is useful if the size of what goes into
the frame is known and one wants to shrink the outer frame so that it
exactly fits its contents.
CALL:
outbordersize = mbdinnerpixelsize(hframe, innerborderpixelsize)
INPUT:
hframe: handle of MBD frame
innerpixelsize: required pixel size (inner border)
OUTPUT
outbordersize: computed outer border size
ModelitUtilRoot\MBDresizedir
13-Oct-2009 19:20:08
979 bytes
lm_lineprops - return default line options for frame border (line)
SUMMARY
This function returns a structure that can be passed to the Matlab
"line" command. If called without arguments it will produce the
settings that are needed to plot a "standard" border.
Any property value pair that can be passed to the line command can
also be passed to lm_lineprops.
Additionally the argument "shadowed" may be passed. This argument
tells the layout manager to plot not one, but two lines. This results
in a shadow effect.
CALL
s=lm_lineprops(property,value,...)
INPUT
property, value: any line property
'shadowed',B: B=1==> apply shadow
B=0==> do not apply
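EXAMPLE
A minimal sketch; the frame call follows the lm_createframe entry further down in this overview:
s = lm_lineprops('color','k','shadowed',0);
h = lm_createframe(gcf,'lineprops',s,'title','My frame');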
See also: lm_patchprops, lm_createframe
ModelitUtilRoot\MBDresizedir
17-Aug-2008 10:33:55
2060 bytes
mbdlinkobj - link an object to an mbd frame
CALL
mbdlinkobj(hobj, hframe, property, value, property, value,...)
mbdlinkobj(hobj, hframe, struct(property, value))
INPUT
hobj : object or array of handles or jacontrol object
hframe: frame to link to
property: char string containing property name
value: corresponding property value. Note: property/value
combinations may also be passed as a structure.
<property, value>
clipframe
see mbdresize
clipping [0 or 1]
clip object if out of frame borders
enable
Default: enable status is copied from application data
"enable" from frame.
Note
<on> and <off> is supported. <inactive> is not supported.
Object | Frame
enabled | enabled status
'Frame=on' 'Frame=off' 'Frame=inactive'
==========================================
0 ==> 'off' 'off' <not supported>
1 ==> 'on' 'off' <not supported>
2 ==> 'inactive' 'off' <not supported>
3 ==> 'off' 'off' <not supported>
4 ==> 'on' 'on' <not supported>
5 ==> 'inactive' 'inactive' <not supported>
keeppixelsize : if 1, maintain pixel height and width while aligning in matrix
keepypos: if 1 ==> position of slider has no effect on this
object
normpos [X,Y,WIDTH,HEIGHT]
normalized position relative to LL corner of frame
pixelpos [X,Y,WIDTH,HEIGHT]
pixel position relative to LL corner of frame
visible
0 ==> do not show
1 ==> show
row: align on position (row,col) in matrix
col: align on position (row,col) in matrix
OUTPUT
none
AFFECTED OBJECTS
-1- affected application data of frame:
when an object is linked to a frame, this will affect the following
fields of application data of this frame:
uichildren
textchildren
children
javachildren
-2- affected properties of object:
parent: when object-parent differs from frame-parent
units : set to "pixel" when object is of type
text,uicontainer,hgjavacomponent
-3- affected application data of object, required:
normpos
pixelpos
visible
enable
clipping
keepypos
-4- affected application data of object, optional:
clipframe
row
col
keeppixelsize
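EXAMPLE
A minimal sketch, loosely based on the mbd_initialize_axis example elsewhere in this overview; the property values are illustrative:
hframe = mbdcreateframe(gcf,'splithor',0,'title','Demo');
h = uicontrol('style','pushbutton','string','OK');
mbdlinkobj(h,hframe,'pixelpos',[10 10 60 22]);
mbdresize(gcf);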
ModelitUtilRoot\MBDresizedir
20-Mar-2008 19:33:01
11693 bytes
lm_linkslider2frame - make y-position of frame content dependent on a
vertical slider
CALL:
lm_linkslider2frame(hslid, targetframe)
INPUT:
hslid: handle of uicontrol of style "slider"
targetframe: handle of target frame. The contents of this frame can be
moved by using the slider
OUTPUT:
no direct output. The slider handle is stored in the target frame in the
property "slider"
ModelitUtilRoot\MBDresizedir
19-Mar-2010 09:43:20
2739 bytes
mbdpatchprops - return default line options for frame border (patch)
SUMMARY
This function returns a structure that can be passed to the Matlab
"patch" command. If called without arguments it will produce the
settings that are needed to plot a "standard" border for a frame that
is shown using a patch object.
The advantage of using a patch is that it provides a background color
(like a uicontrol frame) but does not obscure axes and objects
plotted in it.
CALL
s=mbdpatchprops(varargin)
INPUT
property, value: any patch property
See also: lm_lineprops, lm_createframe
ModelitUtilRoot\MBDresizedir
17-Aug-2008 10:36:50
2060 bytes
lm_pixelsize - get pixelsize of frame
CALL:
pixelsize = lm_pixelsize(hframe)
INPUT:
hframe: frame handle
OUTPUT:
pixelsize: vector [height, width] with the pixelsize of the frame
EXAMPLE: position figure in the middle of the screen
HWIN = figure;
hmain = lm_createframe(HWIN,'splithor',0,'pixelsize',[300 200]);
lm_resize(HWIN);
pixelsize = lm_pixelsize(hmain);
scrsz = get(0,'screensize');
mid = scrsz(3:4)/2;
set(HWIN,'pos',[mid 0 0]+[-pixelsize/2 pixelsize]);
ModelitUtilRoot\MBDresizedir
12-Aug-2008 12:12:34
1272 bytes
lm_resize - resize the figure and position all the objects it contains
CALL:
lm_resize(hfig, event)
INPUT:
hfig : figure handle
event: standard Matlab callback argument, not used
OUTPUT:
All frames created with "lm_createframe" and all the objects linked to
these frames with "lm_linkobj" are positioned in the figure.
EXAMPLE:
lm_resize(HWIN);
set(HWIN,'Visible','on','ResizeFcn',@lm_resize);
APPROACH:
- make a list of all visible mbdresize frames
- switch off all objects contained in the MBD frames
- switch off all exit buttons
- adjust the slider height if the height of the figure exceeds the slider height
- adjust the slider height
- compute the new positions of the mbdresize frames (including the exit buttons)
- switch the exit buttons of the visible MBD frames back on
- determine the slider position
- scroll the window until the slider value matches the visible window again
- determine for all visible mbdresize frames the positions of the associated objects
ModelitUtilRoot\MBDresizedir
17-Apr-2010 09:53:46
37783 bytes
lm_sortframes - create a sorted list of frames which are created with
lm_createframe; the frames are sorted by level in the hierarchy, parent
and rank
CALL:
[FrameData, parentIndex] = lm_sortframes(hfig)
INPUT:
hfig: figure handle
OUTPUT:
FrameData: structarray with collected information per frame
+----stack[]: debug information
| +----file (char array)
| +----name (char array)
| +----line (double)
+----treetop (logical)
+----parenthandle (double)
+----rank (double)
+----normsize (double array)
+----pixelsize (double array)
+----maxpixelsize (double array)
+----normposition (double array)
+----pixelposition (double array)
+----enable (logical)
+----splithor (double)
+----border (double)
+----exitbutton (logical)
+----exitfunction (char)
+----active (logical)
+----exitbuttonhandle (double)
+----minmarges (double array)
+----children (double)
+----textchildren (double)
+----javachildren (double)
+----uichildren (double)
+----slider (double)
+----patchhandle (double)
+----linehandle (double)
+----shadowlinehandle (double)
+----level (double)
+----showslider (double)
+----handle (double)
+----inborderpos (double)
+----outborderpos (double)
+----activenode (double)
+----enablednode (logical)
parentIndex: list with the parent indices corresponding to each element
in FrameData
APPROACH:
- make a list of all frames
- remove all frames that were not created with createMBDframe from the list
- determine the level of the remaining frames
- sort these levels in ascending order
- compute the property "pixelsize"
ModelitUtilRoot\MBDresizedir
12-Aug-2008 14:59:38
11163 bytes
lm_childframes - list the child frames directly below a given frame
CALL:
h_frames = lm_childframes(hframe)
INPUT:
hframe: handle of the parent frame (scalar)
OUTPUT:
h_frames: list of handles of child frames
ModelitUtilRoot\MBDresizedir
11-Aug-2008 22:44:06
806 bytes
callback - Internal component of dateselector object.
CALL/INPUT/OUTPUT:
not for external use
ModelitUtilRoot\MBDresizedir\@dateselector
17-Aug-2008 10:47:42
232 bytes
dateselector - create dateselector component (Calendar object)
CALL
obj=dateselector(property, value,...)
INPUT
PROPERTY DEFAULT MEANING
Parent gcf Parent of Calendar frame. May be other frame or
figure handle
Backg figcolor Color used for pseudo transparent items
Tag '' Tag of Calendar frame
Rank 1 Rank of Calendar frame
Value now datenum value
Maximize 1 if maximized: show calendar, otherwise show date
only
Callback [] Callback function pointer. This function will be
called when user selects a date. Arguments:
arg1: object
arg2: event
+----calendar: user clicked on calendar
+----month: user clicked on month
+----year: user changed year field
+----date: user changed date field
arg2: value. The current date
OUTPUT
obj: Calendar object. The private fields of the object mainly contain
| information on handles. Object data subject to
| change (like value and maximize property) are
| stored as application data.
+----h_all: handle of frame object
+----h_hdr: handle of header frame
+----h_daytype: handle of daytype field
+----h_day: handle of date field
+----h_mnth: handle of month field
+----h_yr: handle of year field
+----h_expand: handle of expand button
+----BackgroundColor: pseudo transparent color (identical to
| background)
+----h_cal: handle of calendar table frame
+----h_dates: [6x7 double] handles of date buttons
OBJECT METHODS
obj.set
obj.get
obj.callback
SEE ALSO
selectdate
EXAMPLE:
if nargin==0
%test
NM='test window';
delete(findobj('name',NM));
figure('name',NM);
h_fr=mbdcreateframe(gcf,'splithor',0);
mbdcreateframe(h_fr,'border',1,'splithor',0,'rank',2,'normsize',[1 10]);
%initialize
dateselector('Parent',h_fr,'tag','Example');
set(gcf,'resizef',@mbdresize);
%update
for k=1:100
h_frame =findobj('tag','Example');
obj =get(h_frame,'userdata');
curvalue =get(obj,'value');
set(obj,'value',curvalue+1);
pause(.1);
end
return
end
Create dateselector object and set user defined fixed properties:
ModelitUtilRoot\MBDresizedir\@dateselector
17-Aug-2008 10:39:14
7921 bytes
dateselector/get - get property of calendar object
CALL
prop_value=get(obj,prop_name)
INPUT
prop_name:
Name of the property that is retrieved
OUTPUT
prop_value:
Value of the property that is retrieved
SEE ALSO
dateselector/set
ModelitUtilRoot\MBDresizedir\@dateselector
17-Aug-2008 10:44:05
1164 bytes
dateselector/set - change property of calendar object
CALL
set(obj,property,value,...)
set(obj,propertystruct,...)
INPUT
<option>,<argument>
See dateselector constructor for list of possible options
OUTPUT
obj: Calendar object after update
SEE ALSO
dateselector/get
ModelitUtilRoot\MBDresizedir\@dateselector
17-Aug-2008 10:42:02
3335 bytes
getDefopt - Private function of dateselector object
CALL/INPUT/OUTPUT:
not for external use
ModelitUtilRoot\MBDresizedir\@dateselector\private
17-Aug-2008 10:47:48
347 bytes
lm_arrange - arrange uicontrol objects in rows and columns
CALL
lm_arrange(hframe,varargin)
INPUT
input comes in parameter-name,value pairs (parameter name not case
sensitive)
LMARGE, value: margin left (Default =10)
LMARGE is a scalar
RMARGE, value: margin right (Default =10)
RMARGE is a scalar
HMARGE, value: margin between, horizontal (Default =5)
HMARGE may be specified as a vector or scalar
TMARGE, value: margin top (Default =15)
TMARGE is a scalar
BMARGE, value: margin below (Default =6)
BMARGE is a scalar
VMARGE, value: margin between, vertical (Default =10)
VMARGE may be specified as a vector or scalar
PIXELW, value: pixel width of frame (default: compute)
PIXELH, value: pixel height of frame (default: compute)
NORESIZE, value: if set, do not resize frame
HEQUAL, value: if set, distribute Horizontally (default: 0)
VEQUAL, value: if set, distribute Vertically (default: 0)
HNORM, (0,1) if 1: normalize horizontally (use full frame width)
VNORM, (0,1) if 1: normalize vertically (use full frame height)
HCENTER, (0,1,2) if 0: left align
if 1: center items in horizontal direction
if 2: right align items in horizontal direction
NOTE: if HNORM==1 the HCENTER option is ignored
VCENTER, (0,1,2) if 0: top align
if 1: center items in vertical direction
if 2: bottom align
NOTE: if VNORM==1 the VCENTER option is ignored
INDIRECT INPUT
object application data (See also lm_set):
keeppixelsize: set to 1 to prevent changing pixelsize
ignoreh : set to 1 to prevent using height to compute row
pixel height
ignorew : set to 1 to prevent using width to compute column
pixel width
pixelpos : if set, pixelpos is not recomputed
normpos : if option HNORM is active, element 3 of normpos is
used (EXCEPTION: if object is spread over more
columns, its normalized width is not used)
object attributes
pos
type
extent
OUTPUT
pixpos: [pixpos(1) pixpos(2)] extent of the objects, including margins
raster: Coordinates of raster. Suppose raster is M x N:
raster.x.pixelpos (length N+1)
raster.x.normpos (length N+1)
raster.y.pixelpos (length M+1)
raster.y.normpos (length M+1)
APPROACH
ModelitUtilRoot\MBDresizedir\LayoutManager
15-Apr-2010 10:23:34
3670 bytes
lm_childframes - retrieve the frames directly below the specified
frame
CALL:
h_frames = lm_childframes(hframe)
INPUT:
hframe: <handle> of the parent frame
OUTPUT:
h_frames <handle> of the child frames of hframe
ModelitUtilRoot\MBDresizedir\LayoutManager
29-Jun-2008 23:54:19
733 bytes
lm_createframe - create an lm_resize frame
CALL
h=lm_createframe(handle,'property', value, 'property', value)
h=lm_createframe('property', value, 'property', value)
INPUT
handle: handle of the parent frame.
If no parent handle is given,
the current figure becomes the parent
'property'/value :
Non-default properties of the frame.
These only need to be given for non-default values.
Possible properties:
PROPERTY MEANING
======================================================================
'active' visibility of the frame and all children
true==> visible
false==> not visible
'border' (default=true)
visibility of the border of the frame and the frame itself
true==> visible
false==> invisible
NOTE: the border of the frame is drawn on the inner border
'enable' enable properties of this frame and all children
'exitbutton' (default=false)
presence of an exit button
'exitfunction' function that is called when the frame is deactivated
'lineprops' (default: [])
Properties of the line that marks the frame
see Matlab - line for more information
EXAMPLE: ...,'lineprops',lm_lineprops('color','k','shadowed',0 ),...
...,'lineprops',lm_lineprops,...
'shadowed' property: (default true)
When the property lineprops is set, shadowed makes sure
that a shadow is drawn consistently.
'maxpixelsize' (default=[inf inf])
When pixelsize is set this defines the maximum value (per
dimension)
'minmarges' margin in pixels for this frame [LEFT BOTTOM RIGHT TOP]
RELATIVE TO THE PARENT FRAME!! (so not relative to child frames)
NOTE: the border of the frame is drawn on the inner border
'normposition' position of the top frame relative to the figure (normalized)
'normsize' (default=[1 1])
Dimensions of the frame in normalized coordinates
NOTE: by specifying the pixelsize as NaN it is computed
as the sum of the ACTIVE subframes
'parenthandle' handle of the parent frame (usually passed as the first argument)
needed when a top frame is created in a non-current figure
'patchprops' (default: [])
Properties of the patch that marks the frame
see Matlab - patch for more information
EXAMPLE: ...,'patchprops',lm_patchprops('facec',C.WINCOLOR,'linew',1),...
'pixelposition' position of the top frame relative to the figure (in pixels)
'pixelsize' (default=[0 0])
Dimensions of the frame in pixel coordinates
'rank' (default=0)
Placement of the frame:
with horizontal splitting: the higher the rank, the further to the right
with vertical splitting: the higher the rank, the further down
'slider' handle of a slider object
the child frames and objects are positioned depending on the
slider setting.
'splithor' (default= opposite of the split direction of the parent)
true==> split horizontally
false==> split vertically
'title' title string to print
OUTPUT
h: the handle of the created frame
EXAMPLES
Example -1-
Create a figure that sizes to fit contents exactly:
hfig=lm_createframe(HWIN,'splithor',0,'pixelsize',[NaN NaN],'normsize',[0 0]);
Example -2-
Create a figure that sizes to fit contents but does not shrink the figure:
hfig=lm_createframe(HWIN,'splithor',0,'pixelsize',[NaN NaN],'normsize',[1 1]);
ModelitUtilRoot\MBDresizedir\LayoutManager
29-Jun-2008 23:54:19
5180 bytes
delete frame and all dependent items; look for child frames
ModelitUtilRoot\MBDresizedir\LayoutManager
29-Jun-2008 23:54:19
478 bytes
Create a frame that can be minimized
INPUT
h_parent : parent frame
titlestr : title
outer_frame_opt : cell array with options for the outer frame
Default properties:
'normsize',[1 0],...
'pixelsize',[0 NaN],...
'border',0,...
'splithor',0
inner_frame_opt : cell array with options for the inner frame
Default properties:
'normsize',[1 1],...
'lineprops',lm_lineprops,...
'active',1
OUTPUT
h_ItemFrame: frame in which content can be drawn
h_frame: outer frame
SEE ALSO
equivalent to lm_createframe
EXAMPLE
lm_doubleframe(h_parent,'Edit object',{'rank',1,'tag','EDITOR'},{})
lm_doubleframe(h_parent,'Edit object',{'tag','EDITOR'})
lm_doubleframe(h_parent,'Edit object')
ModelitUtilRoot\MBDresizedir\LayoutManager
29-Jun-2008 23:54:19
1463 bytes
lm_exitbutton - create a separate, freely positionable exit button for a
frame; by default the button is placed in the upper-right corner.
INPUT:
hparent: handle of parent frame
BACKG: color for transparent part of button
callback: optional extra callback to call when the frame is closed
OUTPUT:
h: handle of button
EXAMPLE:
position the button in the lower-right corner
h=lm_exitbutton(hparent)
h=lm_exitbutton(h_hlp)
setappdata(h,'normpos',[ 1 1 0 0]);
setappdata(h,'pixelpos',[-14 -14 12 12]);
See also: lm_exittext, lm_frameonoff
ModelitUtilRoot\MBDresizedir\LayoutManager
15-Sep-2008 10:29:30
1052 bytes
lm_initaxes -
CALL:
h = lm_initaxes(HWIN,LAYER)
initialize pixel axes for this window
INPUT
HWIN: window for which pixel axes will be set (defaults to gcf)
LAYER: Layer number. If needed, multiple axes objects can be created
to enable plotting in different layers. Frames plotted in the current
axes obscure lines and text objects in other layers
OUTPUT
h: handle of pixel axes for layer LAYER
EXAMPLE
hax=lm_initaxes;
h=text(1,1,'my text','parent',hax);
lm_linkobj(h,hframe,'pixelpos',[ 10 10 20 20]);
ModelitUtilRoot\MBDresizedir\LayoutManager
29-Jun-2008 23:56:52
1061 bytes
Shadowed : 1==> apply shadow
0==> do not apply
SEE ALSO: lm_patchprops
s=struct('xdata',[],'ydata',[],'facecolor','none','hittest','off','faceli','gouraud');
s=struct('XData',[],'YData',[],'Color',[ 0.6758 0.6602 0.6016],'HitTest','off','LineWidth',1,'Shadowed',1); %capital letters are important because of mbd_frame_edit
ModelitUtilRoot\MBDresizedir\LayoutManager
29-Jun-2008 23:54:19
750 bytes
lm_linkobj - link an object to an mbd frame
CALL
lm_linkobj(hobj, hframe, property, value, property, value,...)
lm_linkobj(hobj, hframe, struct(property, value))
INPUT
hobj : object or array of handles or jacontrol object
hframe: frame to link to
property: char string containing property name
value: corresponding property value. Note: property/value
combinations may also be passed as a structure.
<property, value>
clipframe
see lm_resize
clipping [0 or 1]
clip object if out of frame borders
enable
Default: enable status is copied from application data
"enable" from frame.
Note
<on> and <off> is supported. <inactive> is not supported.
Object | Frame
enabled | enabled status
'Frame=on' 'Frame=off' 'Frame=inactive'
==========================================
0 ==> 'off' 'off' <not supported>
1 ==> 'on' 'off' <not supported>
2 ==> 'inactive' 'off' <not supported>
3 ==> 'off' 'off' <not supported>
4 ==> 'on' 'on' <not supported>
5 ==> 'inactive' 'inactive' <not supported>
keeppixelsize : if 1, maintain pixel height and width while aligning in matrix
keepypos: if 1 ==> position of slider has no effect on this
object
normpos [X,Y,WIDTH,HEIGHT]
normalized position relative to LL corner of frame
pixelpos [X,Y,WIDTH,HEIGHT]
pixel position relative to LL corner of frame
visible
0 ==> do not show
1 ==> show
row: align on position (row,col) in matrix
col: align on position (row,col) in matrix
OUTPUT
none
AFFECTED OBJECTS
-1- affected application data of frame:
when an object is linked to a frame, this will affect the following
fields of application data of this frame:
uichildren
textchildren
children
javachildren
-2- affected properties of object:
parent: when object-parent differs from frame-parent
units : set to "pixel" when object is of type
text,uicontainer,hgjavacomponent
-3- affected application data of object, required:
normpos
pixelpos
visible
enable
clipping
keepypos
-4- affected application data of object, optional:
clipframe
row
col
keeppixelsize
ModelitUtilRoot\MBDresizedir\LayoutManager
29-Jun-2008 23:54:19
4131 bytes
lm_linkslider2frame - make the y-position of the content of a frame depend
on a slider
INPUT
hslid: handle of slider object
targetframe: target frame. The content of this frame is moved vertically
as a function of the slider
OUTPUT
no direct output. The slider handle is stored in the target frame
under the property "slider"
ModelitUtilRoot\MBDresizedir\LayoutManager
29-Jun-2008 23:54:19
816 bytes
SEE ALSO: lm_lineprops
s=struct('xdata',[],'ydata',[],'facecolor','none','hittest','off','faceli','gouraud');
ModelitUtilRoot\MBDresizedir\LayoutManager
29-Jun-2008 23:54:19
525 bytes
get pixelsize of frame that depends on children
INPUT
hframe: frame handle
OUTPUT
pixelsize: pixel size of frame
EXAMPLE 1: keep centre of figure unchanged
hmain=lm_createframe(HWIN,'splithor',0,'pixelsize',[NaN NaN],'norms',[0 0]);
lm_createframe(hmain,'pixelsize',[47 20],'norms',[1 1]);
lm_createframe(hmain,'pixelsize',[20 58],'norms',[1 1]);
pixelsize=lm_pixelsize(hframe);
pos=get(HWIN,'pos');
mid=pos(1:2)+pos(3:4)/2';
set(HWIN,'pos',[mid 0 0]+[-pixelsize/2 pixelsize]);
lm_resize
EXAMPLE 2: position figure in the middle of the screen
pixelsize=lm_pixelsize(h_main);
scrsz=get(0,'screens');
mid=scrsz(3:4)/2;
set(HWIN,'pos',[mid 0 0]+[-pixelsize/2 pixelsize]);
ModelitUtilRoot\MBDresizedir\LayoutManager
29-Jun-2008 23:54:19
1214 bytes
lm_resize - resize the figure and all objects in the figure
CALL
callback function for ResizeFcn
INPUT
hfig : figure handle
event: not used
OUTPUT
All frames created with "lm_createframe" and all objects linked to
frames with "lm_linkobj" are positioned in a figure.
EXAMPLE
lm_resize(HWIN);
set(HWIN,'Vis','on','ResizeFcn',@lm_resize);
APPROACH
- make a list of all visible lm_resize frames
- switch off all objects contained in the MBD frames
- switch off all exit buttons
- adjust the slider height if the height of the figure exceeds the slider height
- adjust the slider height
- compute the new positions of the lm_resize frames (including the exit buttons)
- switch the exit buttons of the visible MBD frames back on
- determine the slider position
- scroll the window until the slider value matches the visible window again
- determine for all visible lm_resize frames the positions of the associated objects
ModelitUtilRoot\MBDresizedir\LayoutManager
29-Jun-2008 23:54:19
1468 bytes
lm_set - change property of individual object after properties for a
group have been set with lm_linkobj.
CALL
lm_set([h1,h2..],...
'normpos',NP,...
'pixelpos',PP,...
'visible',V,...
'enable',E,...
'clipping',C,...
'clipframe',CF,...
'keepypos',K,...
'keeppixelsize',KP)
INPUT:
[h1,h2,..]
Array of handles for which properties will be changed
Property-Value pairs
See lm_linkobj for descriptions
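EXAMPLE
A minimal sketch, assuming h1 and h2 were previously linked to a frame with lm_linkobj:
lm_set([h1 h2],'visible',0,'enable',0);   % hide and disable both objects
lm_resize(gcf);                           % apply the new layout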
See also: lm_linkobj
ModelitUtilRoot\MBDresizedir\LayoutManager
29-Jun-2010 15:22:36
2131 bytes
sortframes - create a sorted list of lm_resize frames
CALL
h=sortframes
INPUT
OUTPUT
FrameData[]: collected information per frame
+----stack[]: debug information
| +----file (char array)
| +----name (char array)
| +----line (double)
+----treetop (logical)
+----parenthandle (double)
+----rank (double)
+----normsize (double array)
+----pixelsize (double array)
+----maxpixelsize (double array)
+----normposition (double array)
+----pixelposition (double array)
+----enable (logical)
+----splithor (double)
+----border (double)
+----exitbutton (logical)
+----exitfunction (char)
+----active (logical)
+----exitbuttonhandle (double)
+----minmarges (double array)
+----children (double)
+----textchildren (double)
+----javachildren (double)
+----uichildren (double)
+----slider (double)
+----patchhandle (double)
+----linehandle (double)
+----shadowlinehandle (double)
+----level (double)
+----showslider (double)
+----handle (double)
+----inborderpos (double)
+----outborderpos (double)
+----activenode (double)
+----enablednode (logical)
parentIndex[]: corresponding list with parent indices
APPROACH
- make a list of all frames
- remove all frames that were not created with createMBDframe from the list
- determine the level of the remaining frames
- sort these levels in ascending order
- compute the property "pixelsize"
ModelitUtilRoot\MBDresizedir\LayoutManager
29-Jun-2008 23:54:19
3296 bytes
WIJZ ZIJPP 20100415. This file was modified to obtain more robust behaviour in compiled mode. Preference files are now stored in the local directory (pwd). The file is identical to "addpref" but all calls to "prefutils" are replaced by calls to "prefutilsModelit".
ModelitUtilRoot\PublicFiles
15-Apr-2010 11:45:39
1535 bytes
WIJZ ZIJPP 20100415. This file was modified to obtain more robust behaviour in compiled mode. Preference files are now stored in the local directory (pwd). The file is identical to "getpref" but all calls to "prefutils" are replaced by calls to "prefutilsModelit".
ModelitUtilRoot\PublicFiles
15-Apr-2010 11:42:10
4014 bytes
WIJZ ZIJPP 20100415. This file was modified to obtain more robust behaviour in compiled mode. Preference files are now stored in the local directory (pwd). The file is identical to "ispref" but all calls to "prefutils" are replaced by calls to "prefutilsModelit".
ModelitUtilRoot\PublicFiles
15-Apr-2010 11:40:20
1614 bytes
plot_geo - plot topography and map sheets ("kaartbladen")
CALL
plot_geo(h_kaart,kaartblad,setlabels,coord,mode)
INPUT
h_kaart <axes handle>:
handles of axes objects that will hold plotted objects
defaults to gca
kaartblad <logical>:
if True, plot "kaartblad" layer
defaults to True
setlabels <logical>
if True add labels that will appear in legend
defaults to True
coord <string>
coordinate system of axes "h_kaart". If anything other than RD,
coordinates will be transformed.
defaults to 'RD'
mode <string>
if mode=="noordzee" additional stylistic info is plotted
mode defaults to "nederland"
INDIRECT INPUT
File: 'Kaartbladen.w3h'
OUTPUT
This function returns no output arguments
ModelitUtilRoot\PublicFiles
13-Apr-2010 19:06:35
4842 bytes
Identical to prefutils but retrieve preference files from the pwd location in compiled mode.
ModelitUtilRoot\PublicFiles
15-Apr-2010 12:48:24
3572 bytes
rootpath - prepend exeroot for files that are specified without path in
deployed mode
CALL
fname=rootpath(fname)
INPUT
fname: file specified without a path
OUTPUT
fname
If not deployed mode or fname was specified with path: input.fname
Otherwise: fname =fullfile(exeroot,input.fname)
ModelitUtilRoot\PublicFiles
20-Apr-2009 11:34:58
995 bytes
WIJZ ZIJPP 20100415. This file was modified to obtain more robust behaviour in compiled mode. Preference files are now stored in the local directory (pwd). The file is identical to "setpref" but all calls to "prefutils" are replaced by calls to "prefutilsModelit".
ModelitUtilRoot\PublicFiles
15-Apr-2010 12:18:15
2168 bytes
CrdCnv - convert coordinates
CALL
[crd_out,errorcode] = CrdCnv(crd_type_in,crd_in,crd_type_out)
INPUT
crd_type_in :<string> input coordinate type
crd_in : input coordinates (2 element vector)
crd_type_out:<string> output coordinate type
OUTPUT
crd_out: output coordinates (2 element vector)
errorcode: if error occurs, the errorcode from the C source is
returned
NOTE:
This function calls a mex file
To build:
mcc -x -d build CrdCnv crdcnv_external.c crdcnvmd.c
C prototype
CrdCnvMD ( char crd_type_in[4], long crd_in_1, long crd_in_2,
char crd_type_out[4], long *crd_out_1, long *crd_out_2,
long *error )
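EXAMPLE
A minimal sketch; the coordinate type strings shown here are illustrative, the set of supported types is defined in the C source:
[crd_out, errorcode] = CrdCnv('RD',[155000 463000],'W84');
if errorcode ~= 0
    error('CrdCnv failed with code %d', errorcode);
end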
ModelitUtilRoot\RWSnat
16-Aug-2008 18:47:24
1734 bytes
ModelitUtilRoot\RWSnat
29-Oct-2004 15:35:50
36864 bytes
ComposeDiaList - Make a list of DIA structures that can be displayed in a
Java table.
CALL:
Contents = ComposeDiaList(dialist, fields)
INPUT:
dialist:
Struct array with Donar data blocks, see emptyblok for the format.
fields:
Cellstring, information to be displayed in table, possible values:
- 'Locatiecode','Locatie'
- 'Parameter'
- 'Veldapparaat'
- 'Analysecode'
- 'Tijdstap'
- 'Begindatum'
- 'Einddatum'
OUTPUT:
Contents:
Structure with fields:
- header: Cellstring with columnnames.
- data: Cell array with data.
See also: jacontrol
ModelitUtilRoot\diaroutines
29-Apr-2010 14:12:50
2987 bytes
bepaal_tijdstap - Determine timestep of given time axis.
CALL:
[tijdstapeenheid, tijdstap] = bepaal_tijdstap(taxis, mode)
INPUT:
taxis:
Vector of Matlab datenum.
mode:
(Optional) string with possible values:
'TE' (default) - Assume an equidistant timeseries, tijdstap is the
smallest found timestep, this is useful when the
timeseries has missing values.
otherwise - If timestep is always equal --> TE timeseries.
Otherwise --> TN timeseries.
OUTPUT:
tijdstapeenheid:
String with possible values, empty if TN timeseries:
- 'd' days
- 'min' minute
- 's' seconds
- 'cs' centiseconds
tijdstap:
Integer with timestep in tijdstapeenheid units, empty if TN timeseries.
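EXAMPLE:
A small sketch on a synthetic, equidistant 10-minute time axis:
taxis = datenum(2009,1,1):10/1440:datenum(2009,1,2);
[tijdstapeenheid, tijdstap] = bepaal_tijdstap(taxis);   % expected: 'min' and 10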
See also: cmp_taxis, set_taxis
ModelitUtilRoot\diaroutines
16-Feb-2010 17:37:42
2132 bytes
CALL
rc=checkRKS(RKS)
INPUT
RKS
structure with RKS data
OUTPUT
rc
0 : ok
-20001 : lBegdat not ok
-20002 : lEnddat not ok
-20003 : iBegtyd not ok
-20004 : iEndtyd not ok
-20005 : end before start
ModelitUtilRoot\diaroutines
02-Aug-2010 18:10:27
1736 bytes
cmp_taxis - Compute time axis for Donar timeseries.
CALL:
taxis = cmp_taxis(s, N, SIGNIFIKANTIE)
INPUT:
s:
structure with the following relevant fields (Donar RKS block).
lBegdat: e.g. 19980101
iBegtyd: e.g. 1430
sTydehd: 'min'
iTydstp: 10
lEnddat: 19980228
iEndtyd: 2350
N:
(Optional) total number of datapoints for checking.
SIGNIFIKANTIE:
(Optional) time axis precision, default value: 1440 (minutes);
if necessary specify the second argument N as [].
OUTPUT
taxis:
Vector of Matlab datenum with the equidistant times.
APPROACH:
Convert the specified begin and end time to Matlab datenum.
Compute the step size.
Note: when coding, the complete list of possibilities was not available;
only the unit 'min' has been verified.
Build the taxis array.
When a second argument is also available, the number of time steps
is checked.
In case of an inconsistency a message is issued.
The number of data points specified in N is then leading.
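EXAMPLE:
A small sketch that assembles the relevant RKS fields by hand (the values are illustrative):
s = struct('lBegdat',19980101,'iBegtyd',1430,'sTydehd','min','iTydstp',10,...
           'lEnddat',19980101,'iEndtyd',1530);
taxis = cmp_taxis(s);    % 7 equidistant datenums, 10 minutes apart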
See also: select_interval
ModelitUtilRoot\diaroutines
17-May-2009 16:38:02
2705 bytes
combineRKS - Combine two or more RKS (Reeksadministratie) blocks.
CALL:
RKS = combineRKS(oldRKS, newRKS)
INPUT:
oldRKS:
Struct or struct array with one or more existing RKS blocks.
newRKS:
Struct or struct array with RKS block to be added.
OUTPUT:
RKS:
Structure with combined RKS blocks.
APPROACH:
The period is extended from first to last observation.
There are two different ways to call this function:
1. incremental: 1 RKS is added.
2. parallel: A struct array of RKS blocks is added.
See also: emptyRKS
ModelitUtilRoot\diaroutines
16-Nov-2009 21:24:14
2556 bytes
datenum2long - Convert Matlab datenum to date with format YYYYMMDD,
time with format HHmm and time with format HHmmSS.
CALL:
[Date, Time, LongTime] = datenum2long(D, timeunit)
INPUT:
D:
Scalar, vector or matrix with datenum data.
timeunit:
Optional argument with possible values:
- 'mnd': Donar uses different format for months.
- otherwise: Use standard Donar date format.
OUTPUT:
Date:
Corresponding date(s) in YYYYMMDD.
Time:
Corresponding time(s) in HHmm.
LongTime:
Corresponding time(s) in HHmmSS.
APPROACH:
Round D to whole minutes:
- Add 30 seconds to D and call datevec.
- Ignore 'second' output argument.
NOTE:
.m source of this function is used in mwritedia.c.
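EXAMPLE:
A round-trip sketch with a single datenum value:
D = datenum(2008,8,14,12,30,0);
[Date, Time, LongTime] = datenum2long(D);   % expected: 20080814, 1230, 123000
taxis = long2datenum(Date, Time);           % back to datenum, see long2datenum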
See also: long2datenum
ModelitUtilRoot\diaroutines
17-Aug-2008 14:59:28
2448 bytes
defaultdia - Fill dia with default values.
CALL:
S = defaultdia(S)
INPUT:
S:
DIA structure.
OUTPUT:
S:
DIA structure with default values.
APPROACH:
Check if IDT block is present.
If IDT present: fill IDT block with default values with subroutine defaultIDT.
If IDT not present: generate default IDT block with subroutine defaultIDT.
Check if one or more series are present.
If series present: fill series with default data with subroutine DefaultData.
See also: dimspecs
ModelitUtilRoot\diaroutines
19-Aug-2008 07:02:26
19184 bytes
dia_merge - Merge two equidistant timeseries.
CALL:
[dia_new, missing, total] = dia_merge(dia_old, dia_new, SIGNIFIKANTIE, copyhiaat)
INPUT:
dia_old:
Structure with existing DIA.
dia_new:
Structure with DIA to be added (overwrite when necessary).
SIGNIFIKANTIE:
(Optional) integer with time axis precision, e.g. 1440 for minutes.
copyhiaat:
(Optional) True -> overwrite existing dia with missing values.
False -> do not overwrite existing dia with missing values.
OUTPUT:
dia_new:
Structure with merged timeseries.
missing:
Integer with total number of values which could not be filled in the
new time axis.
total:
Integer with total number of new datapoints in new taxis.
See also: mergeDias
ModelitUtilRoot\diaroutines
30-Nov-2009 12:16:00
3821 bytes
dimspecs - Read fieldnames as used in the Donar Interface Modules.
CALL:
[veld_empty, veld_0, veld_999, veld_99, veld_NVT, veld_NietVanToepassing] = dimspecs(blok)
INPUT:
blok:
Structure with Donar data blok, with supported fields: 'W3H','MUX',
'SGK','RGH','TYP','RKS','TPS','WRD'.
OUTPUT:
veld_empty:
fields with defaultvalue ''.
veld_0:
fields with defaultvalue 0.
veld_999:
fields with defaultvalue -99.
veld_99:
fields with defaultvalue -999999999.
veld_NVT:
fields with defaultvalue 'NVT'.
veld_NietVanToepassing:
fields with defaultvalue 'Niet Van Toepassing'.
APPROACH:
Define output as constants.
See also: defaultdia
ModelitUtilRoot\diaroutines
19-Aug-2008 07:09:52
4571 bytes
displayStations - Display the stations contained in a dia block in a
specified axis.
CALL:
displayStations(h_map, blok, labels, S)
INPUT:
h_map:
Handle of axis in which to plot the stations.
blok:
Struct array with location information, see emptyblok for format of a
dia block.
labels:
true -> plot station labels and location.
false -> plot station location only.
S:
Struct array with markup for stations, with fields:
- color: Colour triple [r g b].
- markerfacecolor: Colour triple [r g b].
- marker: String, see Matlab plot function.
- markersize: Integer.
- fontsize: Integer.
- legenda: String to be displayed in legend.
- linewidth: Integer indicating the width of the marker edge.
- callback: Function handle of function to call when the
station is clicked on.
- locatie: Char array with stationcodes(sLoccod),
use 'default' to specify default marker.
OUTPUT:
No direct output, the stations specified in the dia blocks are displayed
in the axis with the specified markers.
See also: emptyblok
ModelitUtilRoot\diaroutines
06-Apr-2009 08:37:32
3909 bytes
duration - Calculate duration of a timeunit in Matlab datenum units.
CALL:
d = duration(timeunit)
INPUT:
timeunit:
String with possible values:
- 'mnd' months;
- 'd' days;
- 'min' minutes;
- 'uur' hours;
- 'cs' centiseconds.
OUTPUT:
d:
Duration of the given timeunit in Matlab datenum units.
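EXAMPLE:
A short sketch computing the number of 10-minute steps between two datenums:
t0 = datenum(2009,1,1);
t1 = datenum(2009,1,2);
nsteps = round((t1 - t0)/(10*duration('min')));   % expected: 144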
See also: cmp_taxis
ModelitUtilRoot\diaroutines
15-May-2009 11:15:40
897 bytes
emptyRKS - Make default RKS (Reeksadministratie) block.
CALL:
RKS = emptyRKS
INPUT:
No input required.
OUTPUT:
RKS:
Structure with fields:
+----sRefvlk (char)
+----lBemhgt (double)
+----lBegdat (double)
+----iBegtyd (double)
+----sSyscod (char)
+----sTydehd (char)
+----iTydstp (double)
+----lXcrdgs (double)
+----lYcrdgs (double)
+----lVakstp (double)
+----lEnddat (double)
+----iEndtyd (double)
+----sRkssta (char)
+----lBeginv (double)
+----lEndinv (double)
+----sVzmcod (char)
+----sVzmoms (char)
+----sSvzcod (char)
+----sSvzoms (char)
+----sSsvcod (char)
+----sSsvoms (char)
+----sSsscod (char)
+----sSssoms (char)
+----lXcrdwb (double)
+----lYcrdzb (double)
+----lXcrdob (double)
+----lYcrdnb (double)
+----lXcrdmn (double)
+----lYcrdmn (double)
+----lXcrdmx (double)
+----lYcrdmx (double)
See also: emptyblok
ModelitUtilRoot\diaroutines
18-Aug-2008 12:42:04
3518 bytes
emptyW3H - Make default W3H (W3H administratie) block.
CALL:
W3H = emptyW3H
INPUT:
No input required.
OUTPUT:
W3H:
Structure with fields:
+----sMuxcod (char)
+----sMuxoms (char)
+----lWnsnum (double)
+----sParcod (char)
+----sParoms (char)
+----sCasnum
+----sStaind (char)
+----nCpmcod (double)
+----sCpmoms (char)
+----sDomein (char)
+----sEhdcod (char)
+----sHdhcod (char)
+----sHdhoms (char)
+----sOrgcod (char)
+----sOrgoms (char)
+----sSgkcod (char)
+----sIvscod (char)
+----sIvsoms (char)
+----sBtccod (char)
+----sBtlcod (char)
+----sBtxoms (char)
+----sBtnnam (char)
+----sAnicod (char)
+----sAnioms (char)
+----sBhicod (char)
+----sBhioms (char)
+----sBmicod (char)
+----sBmioms (char)
+----sOgicod (char)
+----sOgioms (char)
+----sGbdcod (char)
+----sGbdoms (char)
+----sLoccod (char)
+----sLocoms (char)
+----sLocsrt (char)
+----sCrdtyp (char)
+----lXcrdgs (double)
+----lYcrdgs (double)
+----lGhoekg (double)
+----lRhoekg (double)
+----lMetrng (double)
+----lStraal (double)
+----lXcrdmp (double)
+----lYcrdmp (double)
+----sOmloop (char)
+----sAnacod (char)
+----sAnaoms (char)
+----sBemcod (char)
+----sBemoms (char)
+----sBewcod (char)
+----sBewoms (char)
+----sVatcod (char)
+----sVatoms (char)
+----sRkstyp (char)
See also: emptyblok
ModelitUtilRoot\diaroutines
16-Oct-2008 13:01:24
4607 bytes
emptyWRD - Make default WRD (Waarde) block.
CALL:
WRD = emptyWRD
INPUT:
No input required.
OUTPUT:
WRD:
Structure with fields:
+----taxis (double)
+----lKeynr2 (double)
+----Wrd (double)
+----nKwlcod (double)
See also: emptyblok
ModelitUtilRoot\diaroutines
18-Aug-2008 12:30:44
503 bytes
emptyblok - Make an empty Donar data block.
CALL:
blok = emptyblok
INPUT:
No input required.
OUTPUT:
blok:
Donar data block, with the following required partial data blocks:
- W3H
- RKS
- WRD (must contain at least one row of data)
optional partial data blocks:
- MUX
- TYP
- TPS
See also: readdia_R14, writedia_R14, emptyDia, emptyW3H, emptyWRD,
emptyMUX, emptyTPS
ModelitUtilRoot\diaroutines
18-Aug-2008 13:29:16
766 bytes
emptydia - Create an empty dia.
CALL:
S = emptydia(n)
INPUT:
n:
Number of blocks filled with default values, default value: 0.
OUTPUT:
S:
Dia Structure, with fields:
+----IDT
| +----sFiltyp (char)
| +----sSyscod (char)
| +----lCredat (double)
| +----sCmtrgl (char)
+----blok
+----W3H (struct): see emptyW3H
+----MUX (struct): empty, see emptyMUX
+----TYP (struct): empty
+----RGH (struct): empty, see emptyRGH
+----RKS (struct): see emptyRKS
+----TPS (struct): empty, see emptyTPS
+----WRD (struct): see emptyWRD
APPROACH:
This function inializes the structure with the correct fields. Besides
correct fields there are several other conditions a Dia structure must
satisfy.
EXAMPLE:
s=emptydia(1);
<CHANGE STRUCTURE s>
writedia_R14(s,'dia.dia');
See also: readdia_R14, writedia_R14, emptyblok, emptyW3H, emptyWRD, emptyMUX,
emptyTPS
ModelitUtilRoot\diaroutines
18-Aug-2008 13:48:52
1551 bytes
interp_blok - Interpolate Donar block to new time axis.
CALL:
blok = interp_blok(blok, taxis, mode)
INPUT:
blok:
Structure with Donar data block, see emptyblok for format.
taxis:
Vector of Matlab datenums.
mode:
String with possible values:
'all' - Estimate all points not in taxis AND missing
values.
other - Estimate only missing values.
OUTPUT:
blok:
Structure with Donar data block, see emptyblok for format.
See also: cmp_taxis, emptyblok
ModelitUtilRoot\diaroutines
24-Feb-2010 12:27:04
4367 bytes
long2datenum - Convert two Longs with date with format YYYYMMDD and time
with format HHmm to Matlab datenum format.
CALL:
taxis = long2datenum(taxisdatum, taxistime, timeunit)
INPUT:
taxisdate:
Vector of Long with format YYYYMMDD.
taxistime:
Vector of Long with format HHmm.
OUTPUT:
taxis:
Vector with corresponding values in Matlab datenum format.
APPROACH:
Using the Matlab 'rem' and 'round' operators Year, Month, Day, Hour and
minute are extracted, followed by a call to datenum to get the Matlab
datenum format of the specified dates.
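EXAMPLE:
A brief sketch converting a Donar date/time pair:
taxis = long2datenum(19980101, 1430);
datestr(taxis)    % expected: 01-Jan-1998 14:30:00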
See also: datenum2long
ModelitUtilRoot\diaroutines
17-Aug-2008 14:51:44
1262 bytes
matroos2dia - Retrieve and convert timeseries from the matroos database.
CALL:
[dia message] = matroos2dia(stuurfilename, metafilename, diafilename)
INPUT:
stuurfilename:
String with name of the file with timeseries to get from matroos.
metafilename:
String with name of the file with metainfo (DIA).
diafilename:
String with name of the file to which the DIA should be exported.
OUTPUT:
dia:
Structure, for format see emptydia, empty on error.
message:
String with message if error has occurred.
APPROACH:
Format of the settingsfile:
sLoccod sParcod sVatcod source loc unit
<string> <string> <string> <string> <string> <string>
HUIBGT WINDRTG FASTRCDR knmi_noos huibertgat wind_direction
tstart tstop
<string> <string>
200701010000 200702010000
In an extra file matroos2dia.opt the following:
url <string> with matroos url e.g. http://matroos2/direct/get_series.php?
proxyadres <string> e.g. proxy.minvenw.nl
proxypoort <integer> 80
verbose <boolean> 1
can be specified.
For source, loc and unit see http://matroos2/direct/get_series.php?
tstart and tstop have dateformat: YYYYMMDDHHmm
See also: emptydia, http://matroos2/direct/get_series.php?
ModelitUtilRoot\diaroutines
09-Feb-2009 11:25:40
10716 bytes
readdia_R14 - Read a DIA file to a Matlab structure.
CALL:
data = readdia_R14(fname)
INPUT:
fname:
String with the name of the DIA file to be read.
OUTPUT:
data:
Dia Structure (empty on error), with fields:
+----IDT
| +----sFiltyp (char)
| +----sSyscod (char)
| +----lCredat (double)
| +----sCmtrgl (char)
+----blok
+----W3H (struct): see emptyW3H
+----MUX (struct): empty, see emptyMUX
+----TYP (struct): empty
+----RGH (struct): empty, see emptyRGH
+----RKS (struct): see emptyRKS
+----TPS (struct): empty, see emptyTPS
+----WRD (struct): see emptyWRD
See also: writedia_R14
ModelitUtilRoot\diaroutines
18-Aug-2008 12:10:58
1394 bytes
ModelitUtilRoot\diaroutines
14-Aug-2008 10:06:50
118784 bytes
set_taxis - Make RKS or TPS block by specifying begintime, endtime,
timeunit and timestep.
CALL:
S = set_taxis(S, tbegin, teind, tijdstapeenheid, tijdstap)
INPUT:
S:
Existing RKS or TPS administrationbuffer, may be empty.
tbegin:
Datenum with begin time.
teind:
Datenum with end time.
tijdstapeenheid:
(Optional) String with timeunit, see DONAR Manual Part 7, section 2.9.3
tijdstap:
(Optional) timestep in tijdstapeenheid units.
OUTPUT:
S:
Structure with RKS or TPS (reeksadministratiebuffer) with new values.
APPROACH:
Convert Matlab datenum to DONAR date and time.
Substitute values. Check if timeunit and timestep need to be added.
Check if timeunit and timestep are valid.
EXAMPLE:
blok(k).RKS=set_taxis(blok(k).RKS,min(taxis_totaal),max(taxis_totaal));
blok(k).TPS=set_taxis(blok(k).TPS,min(taxis_totaal),max(taxis_totaal));
See also: combineRKS, combineTPS, cmp_taxis
ModelitUtilRoot\diaroutines
18-Aug-2008 17:38:26
2005 bytes
splitlongdate - Split one or more dates of the form YYYYMMDDHHMM into two
numbers date: YYYYMMDD and time: HHMM.
CALL:
[datum, time] = splitlongdate(longdate)
INPUT:
longdate:
Vector of integers of format YYYYMMDDHHMM.
OUTPUT:
datum:
Vector of integers of format YYYYMMDD.
time:
Vector of integers of format HHMM.
EXAMPLE:
[datum, time] = splitlongdate(200808141200)
See also: long2datenum, datenum, datestr, datenum2long
ModelitUtilRoot\diaroutines
09-Feb-2009 11:25:18
632 bytes
writedia - Write DIA structure to file.
CALL:
rc = writedia_R14(S, fname)
INPUT:
S:
Dia structure to save, with fields:
+----IDT
| +----sFiltyp (char)
| +----sSyscod (char)
| +----lCredat (double)
| +----sCmtrgl (char)
+----blok
+----W3H (struct): see emptyW3H
+----MUX (struct): empty, see emptyMUX
+----TYP (struct): empty
+----RGH (struct): empty, see emptyRGH
+----RKS (struct): see emptyRKS
+----TPS (struct): empty, see emptyTPS
+----WRD (struct): see emptyWRD
fname:
String with the name of the file to create.
OUTPUT:
rc:
Integer returncode:
rc == 0 operation successful.
rc ~= 0 error, rc contains the DIM errorcode.
See also: readdia_R14, verifyDia
ModelitUtilRoot\diaroutines
02-Aug-2010 17:52:17
4278 bytes
ModelitUtilRoot\diaroutines
14-Aug-2008 10:06:34
98304 bytes
show - show image file
CALL
image: filename (with or without extension)
Modelit
www.modelit.nl
ModelitUtilRoot\docutool
30-Apr-2003 18:56:03
450 bytes
expandAll - expand or collapse entire tree
CALL:
expandAll(jac, expand)
INPUT:
jac: <jacontrol object> of type JTree, TreeTable or JXTable
expand: <boolean>
1: completely expand tree
0: collapse tree
OUTPUT:
no direct output, tree is fully collapsed or expanded
ModelitUtilRoot\jacontrol
22-Mar-2010 10:26:56
2514 bytes
findNode -
CALL:
treePath = findNode(tree,names)
INPUT:
tree: <java object> javax.swing.JTree
names: <cellstring> with the names of the nodes to look up in the tree
OUTPUT:
treePath
ModelitUtilRoot\jacontrol
24-Oct-2005 20:51:58
2293 bytes
getTableValue - this function can be used to retrieve the column and row
in the original datamodel for which an edit action took
place. This function is typically used in a callback.
CALL:
[value, row, col, colname, index] = getTableValue(obj, event)
INPUT:
obj: <nl.modelit.mdlttable.mdltTable object> the table on which the
event took place. (argument of the datacallback of a table)
event: <nl.modelit.mdlttable.event.TableChangedEvent> description of
the event giving information about the row and column of the
table in which the cell was edited (starting from row == 1
and column == 1) (argument of the datacallback of a table)
OUTPUT:
value: the value of the tablecell which was edited.
row: <integer> row of changed cell in the original datamodel
counting from 1.
col: <integer> column of changed cell in the original datamodel
counting from 1.
colname: <string> with columnname
ModelitUtilRoot\jacontrol
03-May-2010 20:44:42
1568 bytes
isopen - return true if the user has double-clicked
CALL
ok=isopen
ok=isopen(HWIN)
ok=isopen(event)
INPUT
HWIN: window handle
event: event from jxtable
OUTPUT
ok: TRUE if the user has double-clicked
NOTE
this function is needed to evaluate double-click status in tables
because the "selectiontype" property does not return correct values in
these cases. This function also works if called outside of a table.
ModelitUtilRoot\jacontrol
30-Mar-2009 15:49:18
1271 bytes
jatypes - list all acceptable values for the "style" property of a
jacontrol object
CALL:
flds = jatypes
INPUT:
no input
OUTPUT:
flds: <cellstring> acceptable styles for a jacontrol object
See also: jacontrol
ModelitUtilRoot\jacontrol
18-May-2010 15:06:02
760 bytes
matlab2javadateformat - convert string with matlab dateformat to a java
dateformat, mainly for use with tables
CALL:
format = matlab2javadateformat(format)
INPUT:
format: string with matlab dateformat
OUTPUT:
format: string with java dateformat
APPROACH:
Matlab format:
yyyy full year, e.g. 1990, 2000, 2002
yy partial year, e.g. 90, 00, 02
mmmm full name of the month, according to the calendar locale, e.g.
"March", "April" in the UK and USA English locales.
mmm first three letters of the month, according to the calendar
locale, e.g. "Mar", "Apr" in the UK and USA English locales.
mm numeric month of year, padded with leading zeros, e.g. ../03/..
or ../12/..
m capitalized first letter of the month, according to the
calendar locale; for backwards compatibility.
dddd full name of the weekday, according to the calendar locale, e.g.
"Monday", "Tuesday", for the UK and USA calendar locales.
ddd first three letters of the weekday, according to the calendar
locale, e.g. "Mon", "Tue", for the UK and USA calendar locales.
dd numeric day of the month, padded with leading zeros, e.g.
05/../.. or 20/../..
d capitalized first letter of the weekday; for backwards
compatibility
HH hour of the day, according to the time format. In case the time
format AM | PM is set, HH does not pad with leading zeros. In
case AM | PM is not set, display the hour of the day, padded
with leading zeros. e.g 10:20 PM, which is equivalent to 22:20;
9:00 AM, which is equivalent to 09:00.
MM minutes of the hour, padded with leading zeros, e.g. 10:15,
10:05, 10:05 AM.
SS second of the minute, padded with leading zeros, e.g. 10:15:30,
10:05:30, 10:05:30 AM.
FFF milliseconds field, padded with leading zeros, e.g.
10:15:30.015.
PM set the time format as time of morning or time of afternoon. AM
or PM is appended to the date string, as appropriate.
Java format
G Era designator Text AD
y Year Year 1996; 96
M Month in year Month July; Jul; 07
w Week in year Number 27
W Week in month Number 2
D Day in year Number 189
d Day in month Number 10
F Day of week in month Number 2
E Day in week Text Tuesday; Tue
a Am/pm marker Text PM
H Hour in day (0-23) Number 0
k Hour in day (1-24) Number 24
K Hour in am/pm (0-11) Number 0
h Hour in am/pm (1-12) Number 12
m Minute in hour Number 30
s Second in minute Number 55
S Millisecond Number 978
z Time zone General time zone Pacific Standard Time; PST; GMT-08:00
Z Time zone RFC 822 time zone -0800
Conversion table:
y -> y
m -> M
M -> m
d -> d
H -> H
S -> s
F -> S
PM -> a
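EXAMPLE (illustrative sketch, not part of the original help; the expected result follows from the conversion table above):
jformat = matlab2javadateformat('dd-mm-yyyy HH:MM:SS')
% ==> 'dd-MM-yyyy HH:mm:ss'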
ModelitUtilRoot\jacontrol
01-May-2009 14:05:34
3412 bytes
node2treepath - CALL: treepath = node2treepath(node) INPUT: node: OUTPUT: treepath
ModelitUtilRoot\jacontrol
24-Oct-2005 20:33:30
553 bytes
tableWindow - create a window with a table and fill it using tableComposer
CALL:
table = tableWindow(tableComposer)
INPUT:
tableComposer: <function handle> a function whose output is a struct used
to fill the list; the struct has the following form:
'data' cell array with values
'header' cellstr with the column names
<table struct> table to display, see also istable
args: <cell array> with arguments for tableComposer
OUTPUT:
HWIN: handle of the created figure
table: jacontrol of type jxtable
See also: jacontrol
ModelitUtilRoot\jacontrol
24-Dec-2009 13:11:18
4857 bytes
display - display method for the jacontrol object CALL: display(obj) INPUT: obj: object of type jacontrol OUTPUT: no direct output, information about the jacontrol object is plotted on the console See also: jacontrol
ModelitUtilRoot\jacontrol\@jacontrol
07-May-2008 16:05:28
392 bytes
get - get method of the jacontrol object CALL: prop_value = get(obj, prop_name) INPUT: obj: jacontrol object prop_name: string with the property to retrieve OUTPUT: prop_value: value of the property See also: jacontrol/set, jacontrol
ModelitUtilRoot\jacontrol\@jacontrol
12-Jun-2010 09:49:38
60960 bytes
getTableValue - retrieve the value of an edited cell in a sorttable
CALL:
[value,row,col] = getTableValue(obj,event)
INPUT:
obj: <nl.modelit.jacontrol.table.mdltTable object> the sorttable on which the
event took place
event: <nl.modelit.jacontrol.table.event.TableChangedEvent> description of
the event, including the row and column where the event
took place (starting from row == 1 and col == 1)
OUTPUT:
value: the value of the cell that was edited
row: <integer> row in the underlying datamodel of the table,
counting from 1
col: <integer> column in the underlying datamodel of the table,
counting from 1
ModelitUtilRoot\jacontrol\@jacontrol
30-Jun-2006 12:04:50
1020 bytes
overload setappdata for jacontrol objects INPUT
ModelitUtilRoot\jacontrol\@jacontrol
19-Mar-2008 11:39:32
226 bytes
help - overloaded help for jacontrol objects CALL: str = help(obj, prop_name) INPUT: obj: jacontrol object prop_name: string with object's fieldname for which help is needed OUTPUT: str: string with help See also: jacontrol, jacontrol/set, jacontrol/private/jafields
ModelitUtilRoot\jacontrol\@jacontrol
07-May-2010 17:14:28
11203 bytes
hideColumn - hide a column in a sorttable CALL: hideColumn(sorttable, columnname, hide) INPUT: jac: <jacontrol> type jxtable columnname: <string> with name of column to hide hide: <boolean> OUTPUT: no output, the specified column is hidden See also: jacontrol
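EXAMPLE (illustrative sketch, not part of the original help; assumes "tbl" is a jacontrol of style jxtable with a column named 'Id'):
hideColumn(tbl, 'Id', true)    %hide the column
hideColumn(tbl, 'Id', false)   %make it visible again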
ModelitUtilRoot\jacontrol\@jacontrol
23-May-2007 17:59:20
879 bytes
inspect - show table with the object's property value pairs CALL: inspect(obj) INPUT: obj: jacontrol object OUTPUT: no output, a window with a table with property value pairs is displayed See also: jacontrol, tableWindow
ModelitUtilRoot\jacontrol\@jacontrol
26-Mar-2008 10:09:02
3649 bytes
overload setappdata for jacontrol objects INPUT
ModelitUtilRoot\jacontrol\@jacontrol
07-Oct-2004 21:49:27
166 bytes
ishandle - ishandle implementation for jacontrol objects
CALL
rc=ishandle(obj)
INPUT
obj
jacontrol object
OUTPUT
rc
true if the uicontainer of the jacontrol object still is a valid
handle
ModelitUtilRoot\jacontrol\@jacontrol
20-Aug-2008 22:43:12
390 bytes
jacontrol - create a jacontrol object and set user defined fixed
properties
CALL:
[obj, hcontainer] = jacontrol(hParent,propertyName,propertyValue,...)
[obj, hcontainer] = jacontrol(propertyName,propertyValue,...)
INPUT:
hParent: usually the current figure
varargin: property-value pairs, see jacontrol/set for valid properties
OUTPUT:
obj: the jacontrol object
hcontainer: the returned container from the call to javacomponent
EXAMPLES:
%jacontrol objects work together with mbdarrange
[spinner,h] = jacontrol('style','jspin',...
'tag','minframeh',...
'steps',20,...
'toolt',toolt);
set(spinner,'callb',{@set_setting2_0,spinner,'','minframeh'});
setappdata(spinner,'opt',struct('type','int','minimum',200,...
'maximum',1000,'required',1,...
'minstr',toolt,...
'maxstr',toolt));
mbdlinkobj(h,h_inner);
mbdarrange(h_inner,'VMARGE',1,'HMARGE',5);
%jacontrol works together with mbdparse
%jacontrol works together with gcjh:
[spinner,h] = jacontrol('style','jspin',...
'tag','minframeh',...
'steps',20,...
'toolt',toolt);
h_jacontrol = gcjh('minframeh')
See also: gcjh, javacomponent
ModelitUtilRoot\jacontrol\@jacontrol
18-Jun-2010 16:41:32
18087 bytes
set - set method of the jacontrol object CALL: obj = set(obj, varargin) INPUT: obj: jacontrol object varargin: <option>, <argument> pairs OUTPUT: obj: jacontrol object See also: jacontrol/get, jacontrol
ModelitUtilRoot\jacontrol\@jacontrol
19-Aug-2010 11:09:05
186987 bytes
setPieceBarColors - set colors for PieceBarRenderer in sorttable CALL: setPieceBarColors(jac, key, colors) INPUT: jac: <jacontrol> type jxtable key: <double> key value for corresponding color, size is Nx1 colors: <double> color corresponding to key, size is Nx3 OUTPUT: no output, the colors for the PieceBarCellRenderer are set See also: jacontrol
ModelitUtilRoot\jacontrol\@jacontrol
28-Aug-2007 14:12:50
971 bytes
setValue - set a value in a sortable table; the new value must
be of the same type as the old value
CALL:
setValue(jac,arg,row,col)
INPUT:
jac: <object> of type jacontrol with style SortTable
arg: the value to put in the table
row: <integer> row in which the value must be placed
col: <integer> column in which the value must be placed
OUTPUT:
no direct output, the value has been placed in the table in the specified
row and column (row and column start at 1)
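EXAMPLE (illustrative sketch, not part of the original help; assumes "tbl" is a jacontrol with style SortTable):
setValue(tbl, 3.14, 2, 5)   %put 3.14 in row 2, column 5 (same type as the previous value)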
ModelitUtilRoot\jacontrol\@jacontrol
12-Jan-2007 23:29:36
1115 bytes
overload setappdata for jacontrol objects INPUT
ModelitUtilRoot\jacontrol\@jacontrol
19-Mar-2008 11:39:34
229 bytes
tableFormat - converts a jacontrol to a structure with a data
field containing a cell array and a header field containing
the column names in a cellstring; this format can be used
to visualise the component in a jxtable
CALL:
S = tableFormat(obj)
INPUT:
obj: jacontrol object
OUTPUT:
S: structure with fields 'header' - cellstring with column names
'data' - cell array with data
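EXAMPLE (illustrative sketch, not part of the original help; assumes "tbl" is an existing jacontrol object):
S = tableFormat(tbl);
disp(S.header)   %cellstring with column names
disp(S.data)     %cell array with the table data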
ModelitUtilRoot\jacontrol\@jacontrol
20-Jan-2008 11:51:28
765 bytes
test - helper function for jacontrol test routines
ModelitUtilRoot\jacontrol\@jacontrol
11-Apr-2009 12:32:22
4353 bytes
Return all feasible property names for getproperty
CALL
ModelitUtilRoot\jacontrol\@jacontrol\private
20-Jul-2009 19:28:46
271 bytes
helpjacontrol - scan jacontrol/set for help; results are
written to a .mat file
CALL:
S = helpjacontrol
INPUT:
action: (optional) string; if nargin == 1 the saved .mat file is updated
OUTPUT:
S: struct with property help for each jacontrol style
See also: jacontrol
ModelitUtilRoot\jacontrol\@jacontrol\private
26-Mar-2008 13:01:12
3822 bytes
Enumerate fields of handle graphics object that holds jacontrol
Note: UserData has been omitted from this list and is bypassed through objfields
Note: BackgroundColor is moved to jafields
ModelitUtilRoot\jacontrol\@jacontrol\private
20-Jul-2009 18:57:19
547 bytes
IM2JAVA Convert image to Java image for RGB values with transparency
JIMAGE = IM2JAVA(RGB) converts the RGB image RGB to an instance of
the Java image class, java.awt.Image.
Input-output specs
------------------
RGB: 3-D, real, full matrix
size(RGB,3)==3
double (NaN values will be interpreted as transparent)
logical ok but ignored
JIMAGE: java.awt.Image
ModelitUtilRoot\jacontrol\@jacontrol\private
10-Oct-2004 00:54:04
1182 bytes
inspect - show a window with a treetable with jacontrol types and fields
CALL:
jacontroltree
INPUT:
no input
OUTPUT:
no output, a window with a treetable with jacontrol types and fields
is displayed
See also: jacontrol, tableWindow
ModelitUtilRoot\jacontrol\@jacontrol\private
21-Apr-2009 13:37:38
4585 bytes
jafields - Enumerate fieldnames for which a translation to java objects
exists
CALL:
flds = jafields(style)
INPUT:
style: string with jacontrol style
OUTPUT:
flds: cellstring with properties of the given jacontrol style
See also: jacontrol, jacontrol/get, jacontrol/set, jacontrol/help,
jacontrol/private/jatypes
ModelitUtilRoot\jacontrol\@jacontrol\private
12-Jun-2010 09:49:56
13080 bytes
Enumerate fields for jacontrol object
ModelitUtilRoot\jacontrol\@jacontrol\private
20-Jul-2009 19:25:06
382 bytes
evaldepend - evaluate update structure for combination of object and figure
SUMMARY
This function evaluates the update structure for the combination of:
- 1 or more undoredo objects registered with setdepend
- a figure
Prior to calling this function the dependency tree must be specified
using the setdepend command.
CALL:
upd = evaldepend(HWIN, ind, signature)
upd = evaldepend(HWIN, ind)
INPUT:
HWIN: figure handle
ind: subscripts applied to modify object data
signature: signature of modified undoredo object
OUTPUT:
upd: <struct> that contains the screen elements that should or
should not be updated
upd.property=0 ==> do not update screen element
upd.property=1 ==> update screen element
EXAMPLE
The code below provides a template for usage of dependency trees
In the present example 2 undoredo objects are used. Typically this is
needed when the application depends on:
-1- workspace data
-2- user preferences
%Include in the main body of the application:
db=undoredo(initdb,'disp',@dispdata);
setdepend(HWIN, db, data2applic);
opt=undoredo(initopt,'disp',@dispsettings);
setdepend(HWIN, opt, settings2applic);
function s=initdb
-user defined function-
function s=initopt
-user defined function-
function db=get_db
-user defined function-
function opt=get_opt;
-user defined function-
function dispdata(signature,db,ind)
upd = evaldepend(HWIN, ind, signature)
opt=get_opt;
view(db,opt,upd);
function dispsettings(signature,opt,ind)
upd = evaldepend(HWIN, ind, signature)
db=get_db;
view(db,opt,upd);
function view(db,opt,upd)
-user defined function-
if upd.element1
-user defined action-
end
if upd.element2
-user defined action-
end
See also: setdepend, mdlt_dependencies
ModelitUtilRoot\matlabguru
28-Jun-2010 16:50:47
4083 bytes
getdepend - retrieve dependency tree for combination of object and figure
CALL:
deptree = getdepend(HWIN, obj)
or
deptree = getdepend(HWIN, signature)
INPUT:
HWIN: figure handle
argument 2:
obj: undoredo object (in this case the overloaded version of getdepend
will be called)
or
signature: signature value (double)
OUTPUT:
deptree: dependency tree that has been registered for this combination
of object and figure, see also setdepend.
NOTE:
This function also exists as an overloaded function
See also: setdepend, evaldepend
ModelitUtilRoot\matlabguru
28-Aug-2009 08:39:50
1587 bytes
retrieve - retrieve undoredo object using specified name "dbname"
CALL:
db = retrieve(HWIN, dbname)
db = retrieve(HWIN)
db = retrieve
INPUT:
HWIN: handle
dbname: database name. Typically "db" or "opt"
EXAMPLES:
db=undoredo(initdb,'dbname','db')
....
db=retrieve('db')
See also: undoredo
ModelitUtilRoot\matlabguru
20-Apr-2009 10:52:18
1030 bytes
store - replacement for undoredo/store.m that will be called if flush
returns an empty set in common statements like "store(flush(db))".
CALL
store(obj)
INPUT
obj : any object
See also:
undoredo/flush
ModelitUtilRoot\matlabguru
20-Mar-2008 12:34:48
282 bytes
undomenu - execute undo or redo from the undo menu
CALL
undomenu(obj,event,operation,fp_getdata,HWIN)
undomenu(obj,event,operation,'opt',HWIN)
NOTE: this is not a method of the undoredo class, but a generally
accessible function
INPUT
obj,event: standard Matlab callback arguments
operation
operation==1 ==> undo
operation==2 ==> redo
operation==3 ==> multiple undo/redo
if operation==3 a popup list appears in which the
user indicates a choice
operation==4 ==> reset undo/redo history
fp_getdata:
-1- function pointer to user-specified function that returns database structure.
This can be a 3 line function like:
function ud=getdata
global MAINWIN %handle of application's main window
ud=get(MAINWIN,'userdata')
-2- CHAR string containing "opt" or "db"
HWIN: input argument for fp_getdata (usually figure handle)
USER INPUT
selection in the undo list window
OUTPUT TO SCREEN
none
APPROACH
operation==1 ==> undo
This is done with object method UNDO
operation==2 ==> redo
This is done with object method REDO
operation==3 ==> multiple undo/redo
This is done with object methods UNDO and REDO
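EXAMPLE (illustrative sketch, not part of the original help; the menu layout is hypothetical):
hEdit = uimenu(gcf, 'label', 'Edit');
uimenu(hEdit, 'label', 'Undo', 'callback', {@undomenu, 1, 'db', gcf});
uimenu(hEdit, 'label', 'Redo', 'callback', {@undomenu, 2, 'db', gcf});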
ModelitUtilRoot\matlabguru
01-Dec-2009 00:34:36
2700 bytes
arglist - auxiliary function for undoredo/subsref that implements cat
method
SUMMARY
This method is needed for undoredo object to be able to respond to
syntax str=strvcat(db.array.fld)
CALL
obj = arglist(data)
INPUT
data
any Matlab variable
OUTPUT
obj
arglist object that encapsulates data
ModelitUtilRoot\matlabguru\@arglist
17-Aug-2008 15:15:26
488 bytes
cat - concatenate data stored in arglist object
CALL
data = cat(dim, obj)
INPUT
dim: 1 or 2, dimension for which concatenation is required
obj: arglist object
OUTPUT
data: concatenated data
ModelitUtilRoot\matlabguru\@arglist
09-May-2009 14:07:06
620 bytes
ModelitUtilRoot\matlabguru\@arglist
12-Jun-2007 14:13:32
29 bytes
ModelitUtilRoot\matlabguru\@arglist
12-Jun-2007 14:22:38
37 bytes
ModelitUtilRoot\matlabguru\@arglist
12-Jun-2007 14:14:10
28 bytes
applymenu - execute undo or redo from the undo menu
CALL
applymenu(obj,operation)
OR
undomenu(obj,event,operation,fp_getdata)
INPUT
operation
operation==1 ==> undo
operation==2 ==> redo
operation==3 ==> multiple undo/redo
if operation==3 a popup list appears in which the
user indicates a choice
operation==4 ==> reset undo/redo history
USER INPUT
selection in the undo list window
OUTPUT TO SCREEN
none
APPROACH
operation==1 ==> undo
This is done with object method UNDO
operation==2 ==> redo
This is done with object method REDO
operation==3 ==> multiple undo/redo
This is done with object methods UNDO and REDO
ModelitUtilRoot\matlabguru\@undoredo
15-Aug-2008 15:46:05
2995 bytes
delete all cache- and autosave files that belong to undoredo object
SUMMARY
Depending on the properties specified upon object creation, different
files may be associated with an undoredo object, such as cache files
and backup files.
cleanupdisk removes these files and should be called when the
undoredo object is no longer needed.
CALL
cleanupdisk(obj)
INPUT
obj: undoredo object
OUTPUT
This function returns no output arguments
EXAMPLE
%Insert somewhere in the main body:
set(gcf,'deletef',@deleteFcn)
function deleteFcn (obj,event)
% application delete function
%retrieve undoredo object:
db = get_db; %(get_db must be provided)
%remove all files associated with undoredo object:
cleanupdisk(db);
%Destroy figure:
delete(obj)
ModelitUtilRoot\matlabguru\@undoredo
20-Apr-2009 11:34:51
1044 bytes
closegroup - close a group of transactions.
SUMMARY
In the undo redo menu, all transactions in a group are presented as
one line. The closegroup command is used to separate different groups of
transactions. Normally the closegroup command is not needed as the
undoredo/store command closes a group of transactions before storing
the undoredo object.
closegroup is needed in the specific case where you are performing a
series of operations that should appear separately in the undo list,
but there is no reason to store the database in between.
CALL
db=closegroup(db)
INPUT
db: undoredo object
OUTPUT
db: undoredo object after update
See also: store
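EXAMPLE (illustrative sketch, not part of the original help; assumes "db" is an existing undoredo object):
db.a = 1;              %first group of transactions
db = closegroup(db);   %close the group: the next assignment starts a new group
db.b = 2;              %second group, shown as a separate line in the undo menu
store(flush(db));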
ModelitUtilRoot\matlabguru\@undoredo
08-Aug-2008 11:20:06
1327 bytes
deletequeue - make display queue empty without calling display function
SUMMARY
The undoredo object keeps track of the items that should be updated
by the display function by storing the substruct arguments passed on to
the subsasgn method in a cell array.
When the flush method is called this queue is passed on to the display
function and the queue is made empty.
If you are working on an application that uses two undoredo objects
that can be modified independently, for example 1 for data and 1 for
user preferences, situations might occur where:
- both undoredo objects have a nonempty queue
- calling flush for 1 of the undoredo objects makes calling
the flush method for the other object no longer needed
In this case you may invoke deletequeue to tell the other object
that it can empty its queue. If you omit this, no real harm will be
done, but the next time flush is called for this object. Some
object will be repainted, causing un undesired user experience.
CALL
db=deletequeue(db)
INPUT/OUTPUT
db: undoredo object
EXAMPLE
to prevent calling display function twice, make queue empty
Typical use:
change settings (ur_assign,DRAWNOW,0)
delete queue
change data (ur_assign, DRAWNOW,1) <==this one calls display function
SEE ALSO
flush
ModelitUtilRoot\matlabguru\@undoredo
08-Aug-2008 15:16:02
1547 bytes
display - overloaded function for disp
SUMMARY
The undoredo object is designed so that the analogies with a "normal"
Matlab variable are maximized. When the function disp is invoked on
an undoredo object, disp(db.data) will be called. A line "undoredo
object" is displayed to notify the user of the calls of the object.
CALL
disp(db)
INPUT
db: undoredo object
OUTPUT
this function returns no output arguments
ModelitUtilRoot\matlabguru\@undoredo
08-Aug-2008 15:16:08
560 bytes
fieldnames - determine the fields of the undoredo-object that can be
changed by the user
CALL:
fields = fieldnames(obj)
INPUT:
obj: <undoredo-object>
OUTPUT:
fields: <cellstring> with the fields of the undoredo object
APPROACH:
this function is also important for autocomplete in the command window
SEE ALSO: undoredo, fieldnames
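EXAMPLE (illustrative sketch, not part of the original help):
db = undoredo(struct('a',1,'b',2));
fields = fieldnames(db)   % expected: {'a';'b'}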
ModelitUtilRoot\matlabguru\@undoredo
19-Sep-2006 21:29:50
468 bytes
Perform all paint actions that are required for the transactions since
last flush
CALL
obj=flush(obj)
obj=flush(obj,'all')
obj=flush(obj,extra)
INPUT
obj: undoredo object
extra: extra item to be passed on in cell-array 'queued'
typical use:
obj=flush(obj,'all'): update all elements
Note that the displayfunction that is used should be able to deal with
this extra argument
OUTPUT
obj: updated version of obj (the paint queue will be empty)
EXAMPLE
% paint all screen elements with changes in underlying data:
flush(obj)
% paint all screen elements:
flush(obj,'all')
%mimic change of specific field without actually changing data:
flush(guiopt,substruct('.','showodpair'));
TIPS AND TRICKS
The next code fragment shows how complex argument checking can be
implemented. If a user enters a number of arguments that are
mutually inconsistent, the user should be warned and the previous GUI
state must be restored.
function someCallback
db=get_db;
db=processUserInput;
db=flush(db)
if isempty(db)
%repaint interface based on previous data
warndlg(<somewarning>);
db=get_db;
db=flush(db,'all'); %you might want to refine here
end
store(db)
See also
store
deletequeue
isemptyqueue
ModelitUtilRoot\matlabguru\@undoredo
28-May-2010 13:41:14
3267 bytes
getdata - Retrieve data content from undoredo object
SUMMARY
In most cases data is retrieved from an object by subscripting, for
example:
obj=undoredo(1:8)
a=obj(3)
==> a=3
However there is no subscript that retrieves the complete
datastructure. For this purpose use the data() operator:
obj=undoredo(data1)
data2=data(obj)
==> data2 is an exact copy of data1
CALL
data=getdata(obj)
INPUT
obj: undoredo object
OUTPUT
data: data content of undoredo object
EXAMPLE
If OBJ is an undoredo object, the following example shows how to
clear the undoredo history of an object:
OBJ=undoredo(data(OBJ));
NOTE
There is a subtle difference between data=getdata(db) and data=db(:).
The (:) operator always returns an Mx1 vector with M=numel(db). If db
contains data of size m x n, getdata is needed to retrieve the data
content and data size.
SEE ALSO
undoredo/size
ModelitUtilRoot\matlabguru\@undoredo
20-Apr-2009 11:34:51
1124 bytes
getdepend - retrieve dependency tree for combination of object and figure
(overloaded method)
CALL
setdepend(HWIN,obj)
INPUT
HWIN: figure handle
obj: undoredo object
OUTPUT
deptree: dependency tree that has been registered for this combination
of object and figure, see also setdepend
EXAMPLE
The code below provides a template for usage of dependency trees
%Include in the main body of the application:
db=undoredo(initdb,'disp',@dispdata);
setdepend(HWIN, db, data2applic);
function s=initdb
-user defined function-
function db=get_db
-user defined function-
function dispdata(signature,db,ind)
upd = getdepend(HWIN, db)
if upd.element1
-user defined action-
end
if upd.element2
-user defined action-
end
ModelitUtilRoot\matlabguru\@undoredo
20-Apr-2009 11:34:52
1170 bytes
getprop: return property of undoredo object
SUMMARY:
This method implements a "get" method for fields of an undoredo object that
normally are not visible. The function is intentionally not
documented.
CALL
prop_value=getprop(obj,prop_name)
INPUT
obj: undoredo object
prop_name: property to retrieve (incomplete string accepted)
OUTPUT
prop_value: property
See also: setprop
EXAMPLE
Q=getprop(db,'que')
db.prop=value
db=setprop(db,'que',Q); %prevent update
store(db);
NOTE
this function has the name getprop, so Matlab get need not be overloaded
(this saves approx 0.001 sec per call)
ModelitUtilRoot\matlabguru\@undoredo
20-Apr-2009 11:34:52
1024 bytes
getsignature - retrieve the signature of an undoredo object CALL: signature = getsignature(obj) INPUT: obj: <undoredo object> OUTPUT: signature: <double> with object's signature
ModelitUtilRoot\matlabguru\@undoredo
13-Dec-2005 23:51:28
285 bytes
retrieve name of settings file
CALL
str = getsttname(obj)
INPUT
obj: undoredo object
See also : sttsave
ModelitUtilRoot\matlabguru\@undoredo
05-May-2009 11:22:24
348 bytes
iscommitted - return status of object
SUMMARY
When an undoredo object is created or its contents are reassigned
using setdata, the status "committed" is set to TRUE. Each command
that changes the data content also sets the status "committed" to
FALSE. The status "committed" is used in the application program to
decide if the user should be asked to save data when the application is
closed. This is typically implemented in the close request function.
When the user saves intermediate results the application should
include a statement that sets the committed status to TRUE.
CALL
committed=iscommitted(obj)
INPUT
obj: undoredo object
OUTPUT
committed: Committed status (TRUE or FALSE)
Committed status is TRUE ==> all transactions have been
committed (saved to disk or
stored otherwise)
Committed status is FALSE ==> one or more transactions
have not been committed
EXAMPLE
(code example save data)
ud = getdata(db);
save(fname,'ud'); %WIJZ ZIJPP OKT 2006
db=setcommitted(db);
store(db);
(code example close request function)
function closereq(hFig,event)
db = get_db;
if isempty(db)||iscommitted(db)
delete(hFig);
return
end
%Ask and store unsaved data
switch questdlg('Save data?','Close application','Yes','No','Cancel','Yes')
case 'Yes'
savedata(db);
delete(hFig);
case 'No'
delete(hFig);
return
case 'Cancel'
return;
end
See also
undoredo/setcommitted
undoredo/subsasgn
undoredo/mbdvalue
undoredo/isopen
ModelitUtilRoot\matlabguru\@undoredo
17-Aug-2008 15:21:03
2145 bytes
Overloaded method for isempty within class undoredo
CALL/INPUT/OUTPUT
type "help isempty" for help on this topic
ModelitUtilRoot\matlabguru\@undoredo
17-Aug-2008 16:03:36
182 bytes
Return true if the visualization queue of the object is empty
CALL
rc=isemptyqueue(obj)
INPUT
obj: undoredo object
OUTPUT
rc: true if nothing left to paint
EXAMPLE
% callback of OK button: callbacks of edits have changed the database, but
% changes are not yet fully shown because db has not been flushed
if isemptyqueue(obj)
return
end
store(flush(obj))
See also
store
flush
deletequeue
ModelitUtilRoot\matlabguru\@undoredo
11-Apr-2008 00:11:29
556 bytes
Overloaded method for isfield within class undoredo
CALL/INPUT/OUTPUT
type "help isfield" for help on this topic
ModelitUtilRoot\matlabguru\@undoredo
17-Aug-2008 16:03:57
203 bytes
isopen - return group status of undoredo object
CALL
isopen=isopen(obj)
INPUT
obj: undoredo object
OUTPUT
isopen: status of group
isopen = 1 ==> the group is open. a next ur_assign statement
would add to this group
(newgroup=0)
isopen = 0 ==> the group is closed. a next ur_assign
statement initializes a new group
(newgroup=1)
SEE ALSO:
ur_assign
mbdflush
ur_label
REMARK
mbdsubsasgn needs a newgroup argument. When mixing ur_assign and
mbdsubsasgn statements use newgroup=~isopen(db) or equivalent:
newgroup=db.nextisnew
ModelitUtilRoot\matlabguru\@undoredo
28-Jan-2005 13:50:37
1013 bytes
THIS FUNCTION IS OBSOLETE. USE SETLABEL
ModelitUtilRoot\matlabguru\@undoredo
09-Aug-2008 14:23:57
1386 bytes
logbookentry - update transaction log (add entry)
CALL:
db = logbookentry(db,content,type,undolabel,comment)
INPUT:
db : undoredo object (to be updated)
content : value for content property (CELL or CHAR array)
type : value for type property (default: empty)
undolabel: label for undo menu (default: empty) Include this if
transact-set defines the only element of an update group
comment : value for comment property (default: empty)
OUTPUT
obj : undoredo object (update complete)
The field "transaction" is inititialized or appended with
the following data structure:
transaction
+----date (double)
+----content (char array)
+----type (char array)
+----comment (char)
SEE ALSO : logbookgui (currently transact_gui)
EXAMPLE
data.value=1:10
db=undoredo(data);
data.value(5)=50;
db=logbookentry(db,'element 5 has been updated','updated');
disp(getdata(db))
ModelitUtilRoot\matlabguru\@undoredo
09-Aug-2008 11:39:04
2230 bytes
mbdredo - redo modifications to undoredo object
CALL
obj=redo(obj,N)
INPUT
obj : undoredo object
N : number of redo steps (default: N=1)
OUTPUT
obj : updated undoredo object
See also: undo
ModelitUtilRoot\matlabguru\@undoredo
15-Aug-2008 15:46:05
624 bytes
setcommitted - set 'committed' status of object
SUMMARY
When an undoredo object is initialized its committed status is
initialized as TRUE.
Each time the undoredo object is modified, its committed status is
set to FALSE. If an application contains a function that saves the
data to disk, a call to setcommitted can be used to indicate that data
have been saved. If the application is closed a call to iscommitted can
reveal if modifications have been made since the last save.
CALL
obj=setcommitted(obj)
obj=setcommitted(obj,comitted)
INPUT
obj: undoredo object
committed: Committed status (TRUE or FALSE)
Committed status is TRUE ==> all transactions have been
committed (saved to disk or
stored otherwise)
Committed status is FALSE ==> one or more transactions
have not been committed
OUTPUT
obj: updated version of undoredo object
EXAMPLE
(code example save data)
ud = getdata(db);
save(fname,'ud'); %WIJZ ZIJPP OKT 2006
db=setcommitted(db);
store(db);
(code example close request function)
function closereq(hFig,event)
db = get_db;
if isempty(db)||iscommitted(db)
delete(hFig);
return
end
%Ask and store unsaved data
switch questdlg('Save data?','Close application','Yes','No','Cancel','Yes')
case 'Yes'
savedata(db);
delete(hFig);
case 'No'
delete(hFig);
return
case 'Cancel'
return;
end
See also
undoredo/iscommitted
undoredo/subsasgn
undoredo/mbdvalue
undoredo/isopen
ModelitUtilRoot\matlabguru\@undoredo
08-Aug-2008 22:06:13
2178 bytes
setdata - overload method for "=" operator
SUMMARY
The subsasgn method provides no way to replace the data content of an
undoredo object with a new Matlab variable. This method does the job.
The extra argument may be used to indicate whether or not the undo
history should be cleared.
CALL
obj=setdata(obj,data)
obj=setdata(obj,data,reset)
INPUT
obj: undoredo object
data: new data for object
reset: (optional, defaults to true)
if true reset the undo history of the object
See also: getdata subsasgn
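EXAMPLE (illustrative sketch, not part of the original help):
db = undoredo(struct('a',1));
newdata = struct('a',2,'b',3);
db = setdata(db, newdata);        %replace the data content, reset the undo history
db = setdata(db, newdata, false); %replace the data content, keep the undo history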
ModelitUtilRoot\matlabguru\@undoredo
02-Dec-2008 19:50:22
3851 bytes
setdepend - register dependency tree for object with window
SUMMARY
A dependency tree is used by evaldepend to derive the so-called
update structure using the substruct arguments used in various
assignments to an undoredo object.
A dependency tree is specified in a user defined function that
returns a structure that resembles the datamodel of an application.
At each node of this structure a field "updobj" may be added. This
field should contain a cell array with the name or names of the
update actions that are required when an assignment is made that
affects this node or any of its children.
See the example for an illustration.
CALL
setdepend(HWIN,db,deptree)
INPUT
HWIN : figure handle
db : undoredo object
deptree: dependency tree
OUTPUT
This function returns no output arguments, but registers the
dependency tree in the application data "ur_depend" of the figure
EXAMPLE
function example
%initialize
data.a=1;
data.b.c=2;
HWIN=figure; %create application figure
db=undoredo(data,'disp',@view,'storeh',HWIN,'storef','userdata'); %define database
deptree.a.updobj={'update_a'};
deptree.b.updobj={'update_b'};
deptree.b.c.updobj={'update_c'};
deptree.b.d.updobj={'update_d'};
setdepend(HWIN,db,deptree); %register dependency tree
%end of initialize
%do some assignments and view what happens, make sure the function
%"view" (see below) is available
db.b.c=1;
db=flush(db);
% ==>upd =
% update_a: 0
% update_b: 1
% update_c: 1
% update_d: 0
db.b=1;
db=flush(db);
% ==>upd =
% update_a: 0
% update_b: 1
% update_c: 1
% update_d: 1
db.a=1;
db=flush(db);
% ==>upd =
% update_a: 1
% update_b: 0
% update_c: 0
% update_d: 0
function view(signature,S,ind)
upd=evaldepend(gcf,ind,signature)
See also: getdepend, evaldepend
Create the structure that should be stored
ModelitUtilRoot\matlabguru\@undoredo
20-Apr-2009 11:34:53
3352 bytes
setlabel - set label for undo menu for current group
SUMMARY
In the undo/redo menu, all transactions in a group are presented as
one line. The closegroup command is used to separate different groups
of transactions. Normally the closegroup command is not needed as the
store method closes a group of transactions before storing the
undoredo object.
closegroup is needed in the specific case where you are performing a
series of operations that should appear separately in the undo list,
but there is no reason to store the database in between.
CALL:
obj = setlabel(obj, menustring)
INPUT:
obj: <undoredo object>
menustring: <string> for undo/redo menu
(default value: menustring='Modify object')
NOTE: a new group must be initialized with a call to ur_assign. Example:
Wrong:
db=setlabel(db,'My group')
db=ur_assign(...); (Result: this group has no label)
Right:
db=ur_assign(...);
db=setlabel(db,'My group') (Result: this group has label "My group")
See also: ur_closegroup, ur_assign
EXAMPLE:
db=ur_assign(db,substruct('.','raai','()',{refindx}),...
[]);
db=ur_assign(...);
db=setlabel(db,'str')
db=ur_closegroup(db);
ModelitUtilRoot\matlabguru\@undoredo
17-Aug-2008 15:20:21
1801 bytes
setprop - implement set method for undoredo object
SUMMARY:
This method implements a "set" method for fields of an undoredo object that
normally are not visible. The function is intentionally not
documented.
INPUT
<option>,<argument>
OUTPUT
none
ModelitUtilRoot\matlabguru\@undoredo
17-Aug-2008 15:22:01
1281 bytes
show - directly call the display function of an undoredo object.
SUMMARY
Usage of this function allows one to paint (part of) the interface
without changing the data.
Note: usage of this function is not recommended programming practice.
In almost any case the objective can be reached by using the
flush method.
CALL
show(db,ind)
INPUT
db: undoredo object
ind: struct array with fields (defaultvalue='all')
type: '()', '[]', '{}' or '.'
subs: cell array
EXAMPLE
typical use: visualise parts of the GUI that do not depend on the database
but instead on the value of for example a uicontrol
FAQ SECTION
Problem:
show(db,'all') does not give the expected result (nothing happens)
Cause:
the 'all' argument (string) is passed on as {'all'} (cell array)
Remedy:
use show(db). This will call the display function with 'all' as an argument
ModelitUtilRoot\matlabguru\@undoredo
10-Aug-2008 19:07:05
1404 bytes
store - store object with specified handle and field
SUMMARY
When an undoredo object is created the properties "storehandle" and
"storefield" may be specified. This allows an undoredo object to
store itself.
The store operator is similar to the commit action known in
databases.
Before an undoredo object is stored the group of transactions is
closed, so that the next change will initialize a new group.
CALL
store(obj)
INPUT
obj : undoredo object
OUTPUT
this function returns no output arguments, but updates userdata or
applicationdata of specified handle
See also: closegroup
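EXAMPLE (illustrative sketch, not part of the original help; assumes the object was created with the 'storehandle' and 'storefield' properties):
db.a = 42;          %modify the data
db = flush(db);     %repaint dependent screen elements
store(db);          %store the object with the specified handle and close the transaction group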
ModelitUtilRoot\matlabguru\@undoredo
17-Aug-2008 15:19:40
1177 bytes
subsasgn - equivalent to Matlab subsasgn but for undoredo objects
CALL
obj=subsasgn(obj,ind,data)
INPUT
obj: current object
obj: undoredo object
obj.history: all data that is needed to perform
undo and redo actions on object
obj.data: data contents of object
ind: substruct array with fields (see substruct)
type: '()', '[]', '{}' or '.'
subs: cell array
data: the contents of this field depend on the mode of operation:
SEE ALSO
getdata
label
flush
store
ModelitUtilRoot\matlabguru\@undoredo
20-Jan-2007 19:40:41
3980 bytes
subsref - overloaded subsref function for undoredo object
CALL
[data,varargout]=subsref(obj,ind)
INPUT
obj: undoredo object
ind: subsref expression
OUTPUT
result from subsref statement on object
NOTES
KNOWN RESTRICTIONS
In most cases undoredo objects can be applied using the same syntax
as normal matlab variables. A few exceptions exist:
STRVCAT
%Observe the following behavior:
U.a=struct('b',{'first','second','third'}) % U.a is a 3 element struct array
str1=strvcat(U.a.b) %str1 is a 3x6 char array
UU=undoredo(U)
str2=strvcat(UU.a.b)%str2 = 'first'
%Work-around:
aa=UU.a %returns "normal" struct array
str3=strvcat(aa.b) %str3 is a 3x6 char array
%Background: If S is a Nx1 struct array, S.b returns N outputs.
Undoredo objects only return 1 output.
ModelitUtilRoot\matlabguru\@undoredo
08-May-2009 08:41:38
5878 bytes
subsref - overloaded subsref function for undoredo object
ModelitUtilRoot\matlabguru\@undoredo
25-Jul-2007 14:00:01
4438 bytes
undo - undo modifications to undoredo object
CALL
obj=undo(obj,N)
INPUT
obj : undoredo object
N : number of undo steps (default: N=1)
OUTPUT
obj : updated undoredo object
See also: redo
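EXAMPLE (illustrative sketch, not part of the original help; assumes "db" is an existing undoredo object):
db = undo(db);      %undo the last transaction group
db = undo(db, 3);   %undo the last three transaction groups
store(db);          %commit the restored state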
ModelitUtilRoot\matlabguru\@undoredo
15-Aug-2008 15:46:06
520 bytes
undoredo - constructor for undoredo object
CALL:
undoredo(data,<property1>,<value1>,<property2>,<value2>,...)
INPUT
data: initial data content of undoredo object
PROPERTIES:(BY CATEGORY, ALL PROPERTIES ARE OPTIONAL)
Display properties
- displayfunction: function to call when content of undoredo object changes
Possible values : string with name of function or function-pointer
Default value : '' (no display function will be called)
Remark : this function is called in the following way:
feval(signature,update,data,queued);
with:
update: function (string or pointer)
data : full data structure
queued: cell array containing information on modified fields
- signature: approximate time of object creation. Used as a reference
to the undoredo object (without requiring to pass on
the full object and its data). The signature is passed on to
the object's display function. Specify this property if
the content of the workspace is replaced, but no new
call is applied.
- dbname: typically "opt" or "db", but any string that qualifies as a
structure fieldname is allowed. This field may be used in calls
using "retrieve". Examples:
db =retrieve(HWIN,'db')
opt=retrieve(HWIN,'opt')
Autobackup properties
- backupfile: name of autobackup file (empty string: do not make backups)
Possible values : char str (Current path will be added to
filename)
Default value : '' (no automatic backups)
See also : ur_cleanupdisk
- timeoflastbackup: moment of last backup
Possible values : Matlab datenum
Default value : now()
- backupinterval: time between timed backups (1/1440= 1 minute)
Possible values : Any numeric value>0
Default value : 10/1440 (10 minutes between backups)
Undo/Redo properties
- mode: undo mode
Possible values : 'simple', 'memory', 'cached'
Simple: No undo. Use this option if no
undo is required.
memory: undo info stored in memory. Use
this option if no massive datasets are
needed
cached: undo info cached to disk if
needed. Use this option if
workspace contains many MB
Default value : 'memory'
Undo/Redo properties (continued): Autocache properties
- cachefile: (applies only if mode=='cached')
Possible values : char str
Default value : 'urcache'
See also : ur_cleanupdisk
Remark : the name of the cache files will be derived from this parameter
- maxbytes : maximum number of bytes stored in memory before saving to disk
Possible values : integers>0
Default value : 64 Mb
Autostore properties
- storehandle: handle of GUI object with which undoredo object is saved
Possible values : a valid handle of a GUI object (for example a figure handle)
Default value : []
See also : mbdstore
- storefield: name of application data field to store undo redo object in
Possible values : char array
Default value : ''
See also : mbdstore
Remark : setting storefield=='userdata' causes undoredo data to be saved as:
set(storehandle,'userdata',obj)
setting storefield~='userdata' causes undoredo data to be saved as:
setappdata(storehandle,storefield,obj)
Other properties
- committed: committed to disk. This value is set to 0 if the contents of the undoredo object change
Possible values : 0,1
Default value : 1
See also : iscommitted, setcommitted
Remark : if this parameter equals zero, there may be unsaved data
OUTPUT
obj: undoredo object
obj.history: all data that is needed to perform
undo and redo actions on object
obj.data: data contents of object
EXAMPLE
u=undoredo(struct('field1',1,'field2',2),...
'backupfile',C.autosave,...
'backupinterval',C.backupint,...
'displayfunction',@guishow,...
'storehandle',gcf,...
'storefield','undodata');
store(u); %store data
ModelitUtilRoot\matlabguru\@undoredo
05-May-2009 11:19:29
5663 bytes
ur_assign - equal to subsasgn
SUMMARY
Before undoredo class was defined, undo functionality was implemented
by the mbdundo function. The method ur_assign works on the objects
created in this way, and still may exist in some code.
NOTE
Any call to ur_assign should be replaced by equivalent call to subsasgn
CALL/INPUT/OUTPUT
see undoredo/subsasgn
ModelitUtilRoot\matlabguru\@undoredo
17-Aug-2008 16:10:05
549 bytes
NOTE: this function will be phased out and will be replaced by evaldepend; this function is intentionally left undocumented
ModelitUtilRoot\matlabguru\@undoredo
09-Aug-2008 15:22:38
2253 bytes
THIS METHOD IS FOR INTERNAL USE IN UNDOREDO TOOLBOX
add2cache - check if item needs to be added to cache file
CALL
obj=add2cache(obj)
INPUT
obj: undoredo object
obj.history.cur_transact equals the last completed transaction
OUTPUT
obj: undoredo object, cache file modified
ModelitUtilRoot\matlabguru\@undoredo\private
17-Aug-2008 16:10:27
1463 bytes
autosave - timed backup of data
CALL
autosave(fname,data)
INPUT
fname
name of cache file
data
data that will be saved
OUTPUT
none
NOTE: this function is called from undoredo/subsasgn and undoredo/setdata
ModelitUtilRoot\matlabguru\@undoredo\private
17-Aug-2008 17:10:49
631 bytes
cachecleanup - the first time an undoredo object is changed after one or
more undos, all cache files that refer to later transactions are deleted
CALL
obj=cachecleanup(obj)
INPUT
obj: undoredo object
obj.history.cur_transact equals the last completed transaction
OUTPUT
obj: undoredo object, cache file modified
ModelitUtilRoot\matlabguru\@undoredo\private
27-May-2006 14:15:07
2352 bytes
cachename - return name for cache file
CALL
str=cachename(obj,N,type)
INPUT
obj
undoredo object
N
integer
type
string
OUTPUT
str
ModelitUtilRoot\matlabguru\@undoredo\private
17-Aug-2008 17:12:32
339 bytes
currentcache - determine current cache file (if existent)
CALL
[f,indexf]=currentcache(obj,targetstep)
INPUT
obj: undoredo object
targetstep: step to complete
OUTPUT
f
indexf
ModelitUtilRoot\matlabguru\@undoredo\private
17-Aug-2008 17:15:06
1070 bytes
deletecachefile - delete file for undoredo object
CALL
deletecachefile(fname)
INPUT
fname: filename to delete
OUTPUT
none
ModelitUtilRoot\matlabguru\@undoredo\private
17-Aug-2008 17:17:35
668 bytes
THIS METHOD IS FOR INTERNAL USE IN UNDOREDO TOOLBOX
emptyhistory - create an empty history array for an undo object
CALL
history=emptyhistory(data,mode)
INPUT
obj: undoredo object: the following fields are used
obj.data
obj.mode
data: initial data for object
mode: mode of operation (simple,memory,cached)
OUTPUT
history: history structure
CALLED FROM
mbdundomenu, mbdundoobj
ModelitUtilRoot\matlabguru\@undoredo\private
27-May-2006 15:53:28
1643 bytes
emptytransact - initialize transaction record with empty data
CALL
S=emptytransact
INPUT
none
OUTPUT
transaction record with empty data
ModelitUtilRoot\matlabguru\@undoredo\private
17-Aug-2008 17:19:22
514 bytes
mbdundoobj - initialize undoredo object
FUTURE NAME: this function will be superseded by undoredo
CALL:
mbdundoobj(data,<property1>,<value1>,<property2>,<value2>,...)
INPUT
data: initial data content of undoredo object
PROPERTIES:(BY CATEGORY, ALL PROPERTIES ARE OPTIONAL)
Display properties
- displayfunction: function to call when content of undoredo object changes
Possible values : string with name of function or function-pointer
Default value : '' (no display function will be called)
Remark : this function is called in the following way:
feval(signature,update,data,queued);
with:
update: function (string or pointer)
data : full data structure
queued: cell array containing information on modified fields
- signature: approximate time of object creation. Used as a reference
to the undoredo object (without requiring to pass on
the full object and its data). The signature is passed on to
the object's display function. Specify this property if
the content of the workspace is replaced, but no new
call is applied.
- dbname: typically "opt" or "db", but any string that qualifies as a
structure fieldname is allowed. This field may be used in calls
using "retrieve". Examples:
db =retrieve(HWIN,'db')
opt=retrieve(HWIN,'opt')
Autobackup properties
- backupfile: name of autobackup file (empty string: do not make backups)
Possible values : char str (Current path will be added to
filename)
Default value : '' (no automatic backups)
See also : ur_cleanupdisk
- timeoflastbackup: moment of last backup
Possible values : Matlab datenum
Default value : now()
- backupinterval: time between timed backups (1/1440= 1 minute)
Possible values : Any numeric value>0
Default value : 10/1440 (10 minutes between backups)
Undo/Redo properties
- mode: undo mode
Possible values : 'simple', 'memory', 'cached'
Simple: No undo. Use this option if no
undo is required.
memory: undo info stored in memory. Use
this option if no massive datasets are
needed
cached: undo info cached to disk if
needed. Use this option if
workspace contains many MB
Default value : 'memory'
Undo/Redo properties (continued): Autocache properties
- cachefile: (applies only if mode=='cached')
Possible values : char str
Default value : 'urcache'
See also : ur_cleanupdisk
Remark : the name of the cache files will be derived from this parameter
- maxbytes : maximum number of bytes stored in memory before saving to disk
Possible values : integers>0
Default value : 64 Mb
Autostore properties
- storehandle: handle of GUI object with which undoredo object is saved
Possible values : a valid handle of a GUI object (for example a figure handle)
Default value : []
See also : mbdstore
- storefield: name of application data field to store undo redo object in
Possible values : char array
Default value : ''
See also : mbdstore
Remark : setting storefield=='userdata' causes undoredo data to be saved as:
set(storehandle,'userdata',obj)
setting storefield~='userdata' causes undoredo data to be saved as:
setappdata(storehandle,storefield,obj)
Other properties
- committed: committed to disk. This value is set to 0 if the contents of the undoredo object change
Possible values : 0,1
Default value : 1
See also : iscommitted, setcommitted
Remark : if this parameter equals zero, there may be unsaved data
OUTPUT
obj: undoredo object
obj.history: all data that is needed to perform
undo and redo actions on object
obj.data: data contents of object
EXAMPLE
u=mbdundoobj(struct('field1',1,'field2',2),...
'backupfile',C.autosave,...
'backupinterval',C.backupint,...
'displayfunction',@guishow,...
'storehandle',gcf,...
'storefield','undodata');
mbdstore(u); %store data
KNOWN RESTRICTIONS
In most cases undoredo objects can be applied using the same syntax
as normal matlab variables. A few exceptions exist:
STRVCAT
%Observe the following behavior:
U.a=struct('b',{'first','second','third'}) % U.a is a 3 element struct array
str1=strvcat(U.a.b) %str1 is a 3x6 char array
UU=undoredo(U)
str2=strvcat(UU.a.b)%str2 = 'first'
%Work-around:
aa=UU.a %returns "normal" struct array
str3=strvcat(aa.b) %str3 is a 3x6 char array
%Background: If S is a Nx1 struct array, S.b returns N outputs.
Undoredo objects only return 1 output.
ModelitUtilRoot\matlabguru\@undoredo\private
01-Dec-2009 00:03:00
11753 bytes
mbdvalue - evaluate value of undo object for specific transaction number
CALL
obj=mbdvalue(obj,targetstep)
INPUT
obj: undo object
obj.history.cur_transact: currently completed step
targetstep: number of modifications to include
OUTPUT
obj: undoredo object after applying required undo/redo operations
ModelitUtilRoot\matlabguru\@undoredo\private
17-Aug-2008 17:24:04
8796 bytes
This function is now obsolete
ModelitUtilRoot\matlabguru\@undoredo\private
20-Apr-2009 11:34:53
5406 bytes
THIS METHOD IS FOR INTERNAL USE IN UNDOREDO TOOLBOX
CALL
status=undostatus(obj)
INPUT
obj: undo structure
OUTPUT
status: structure with undo information
status.selected: currently selected item
status.list: char array with menu choices
ModelitUtilRoot\matlabguru\@undoredo\private
17-Aug-2008 17:25:27
1448 bytes
undovalue - evaluate value of undo object for specific transaction number
CALL
obj=undovalue(obj,targetstep)
INPUT
obj: undo object
obj.history.cur_transact: currently completed step
targetstep: number of modifications to include
OUTPUT
obj: undoredo object after applying required undo/redo operations
NOTE
undovalue and mbdvalue seem identical, mbdvalue needs to be removed
eventually
ModelitUtilRoot\matlabguru\@undoredo\private
17-Aug-2008 17:27:55
8890 bytes
ur_cleanupdisk - delete all cache- and autosave files that belong to object
CALL
ur_cleanupdisk(obj)
INPUT
obj
undoredo object. Cache files and backup files will be deleted
OUTPUT
none
ModelitUtilRoot\matlabguru\@undoredo\private
20-Mar-2009 15:56:59
635 bytes
ur_deletecache - delete all cache files from disk
CALL
ur_deletecache(obj)
INPUT
obj: undoredo object. Cache files will be detected and deleted
OUTPUT
none
ModelitUtilRoot\matlabguru\@undoredo\private
17-Aug-2008 17:31:36
713 bytes
ur_load - load data from file for undoredo object
CALL
data=ur_load(obj,cache_nr,type)
INPUT
obj
undoredo object
cache_nr
number of cache file
type
type of cache file
OUTPUT
data
retrieved data
ModelitUtilRoot\matlabguru\@undoredo\private
20-Mar-2009 15:57:00
1025 bytes
ur_save - save cache data: use correct file and variable name
CALL
ur_save(obj,cache_nr,type,data)
INPUT
obj:
undoredo object
cache_nr:
cache file number
type:
cache file type
data:
data that need saving
OUTPUT
none
ModelitUtilRoot\matlabguru\@undoredo\private
17-Aug-2008 17:36:16
1083 bytes
FUTURE NAME: ur_depend
CALL
upd=mdlt_dependencies(ind,thistree,othertree1,othertree2,...)
INPUT
ind : substruct with the fields to be modified OR the value "all"
thistree : tree to which "ind" applies
othertree: only used to take an inventory of the fields
OUTPUT
upd: a structure indicating per screen option whether it must be updated (1) or not (0)
ModelitUtilRoot\matlabguru\undoredocopy
28-Jun-2010 17:11:46
1954 bytes
mdlt_initupd - initialize update fields structure with INIT value
FUTURE NAME: ur_depend_init
%
CALL
upd=mdlt_initupd(agtree,INIT,oldupd)
INPUT
agtree
dependency tree
INIT
Initialize all fields of the output structure with this value.
Defaults to false.
oldupd
if specified, the output argument "upd" is initialized with this
value.
OUTPUT
upd
update structure after initialization
EXAMPLE
case 'install'
HWIN=create_fig; %create GUI objects
tree=GUIstructure; %define dependencies in GUI
agtree=mdlt_mastertree(tree); %aggregate tree (for fast searching)
setappdata(HWIN,'mastertree',agtree) %store result with this window for future use
..
% SOME USER ACTION
% "ind" now indexes in changed data fields
..
agtree=getappdata(HWIN,'mastertree') %retrieve result
upd=mdlt_initupd(agtree,0); %initialize all with 0
upd=mdlt_look4change(upd,agtree,ind); %find out which GUI elements need to be updated
show(upd); %call some function that selectively updates GUI
See also:
mdlt_mastertree: generate aggregate dependency tree
mdlt_initupd: initialize update structure
mdlt_look4change: find out which field must be updated as a result of a
structure update
COPYRIGHT
Nanne van der Zijpp
Modelit
Jan 2002
ModelitUtilRoot\matlabguru\undoredocopy
17-Aug-2008 14:01:49
2114 bytes
mdlt_look4change: find out which field must be updated as a result of a
structure update
FUTURE NAME: ur_depend_apply
CALL
upd=mdlt_look4change(upd,ind,agtree)
INPUT
upd: previous update structure
ind: subs array into struct
agtree: tree containing list of screen attributes to be updated
CODE EXAMPLE
case 'install'
HWIN=create_fig; %create GUI objects
tree=GUIstructure; %define dependencies in GUI
agtree=mdlt_mastertree(tree); %aggregate tree (for fast searching)
setappdata(HWIN,'mastertree',agtree) %store result with this window for future use
..
% SOME USER ACTION
% "ind" now indexes in changed data fields
..
agtree=getappdata(HWIN,'mastertree') %retrieve result
upd=mdlt_initupd(agtree,0); %initialize all with 0
upd=mdlt_look4change(upd,agtree,ind); %find out which GUI elements need to be updated
show(upd); %call some function that selectively updates GUI
SEE ALSO
mdlt_mastertree: generate aggregate dependency tree
mdlt_initupd: initialize update structure
mdlt_look4change: find out which field must be updated as a result of a
structure update
REVISIONS
JUNE 2005: major redesign. This may affect details of the
functionality. The field updobjAggreg is introduced. This field
contains the content of updobj of the child nodes EXCLUDING current
node (downward aggregation).
updobj contains the content of updobj of this node and its parents
(upward aggregation)
ModelitUtilRoot\matlabguru\undoredocopy
20-Apr-2009 11:34:56
3481 bytes
mdlt_mastertree - aggregate "updobj" over structure
FUTURE NAME: ur_depend_set
CALL
tree=mdlt_mastertree(tree)
INPUT
tree: dependency tree that defines which object need to be updated if field changes
OUTPUT
tree: dependency tree that defines which object need to be updated if field changes
the field updobj has now been updated
NOTE:
attributes need not be defined at the lowest level, attributes passed on at a higher
level will be superimposed with the lower level attributes
CODE EXAMPLE
case 'install'
HWIN=create_fig; %create GUI objects
tree=GUIstructure; %define dependencies in GUI
agtree=mdlt_mastertree(tree); %aggregate tree (for fast searching)
setappdata(HWIN,'mastertree',agtree) %store result with this window for future use
..
% SOME USER ACTION
% "ind" now indexes in changed data fields
..
agtree=getappdata(HWIN,'mastertree') %retrieve result
upd=mdlt_initupd(agtree,0); %initialize all with 0
upd=mdlt_look4change(upd,agtree,ind); %find out which GUI elements need to be updated
show(upd); %call some function that selectively updates GUI
SEE ALSO
setdepend: part of undoredo toolbox
evaldepend: part of undoredo toolbox
mdlt_mastertree: generate aggregate dependency tree
mdlt_initupd: initialize update structure
mdlt_look4change: find out which field must be updated as a result of a
structure update
REVISIONS
JUNE 2005: major redesign. This may affect details of the
functionality. The field updobjAggreg is introduced. This field
contains the content of updobj of the child nodes EXCLUDING current
node (downward aggregation).
updobj contains the content of updobj of this node and its parents
(upward aggregation)
ModelitUtilRoot\matlabguru\undoredocopy
17-Aug-2008 13:57:44
4705 bytes
ur_getopt - initialize GUI options by reading them from file
SUMMARY
User preferences are settings that apply to the appearance of a
specific figure. When a figure is closed and opened later on users
typically expect that the figure re-appears with identical settings.
To accomplish this, the user preferences should be saved when the figure
closes and loaded again when the figure is created. Saving the data can
best be done in the figure's delete function. Loading the data typically
is done in a function that by convention has the name "initOpt" (but any
other name is allowed)
This function initializes the data structure that represents the user
preferences. This is done in 3 steps:
• create a structure that contains the factory defaults. This is to make
sure that no errors will occur if the figure is opened for the first
time;
load the user preferences as saved when the figure was closed last
time (see function template "deletef"), and overwrite the factory
defaults with the values that are loaded from file. This is done by the
function "ur_getopt";
• set any values that are specific for the current session. For example,
you may store object handles in the user-preference structure.
CALL
opt=ur_getopt(defopt,OPTFILE,varname)
INPUT
defopt : default options (current function overwrites these)
OPTFILE: binary file in which options have been saved earlier
varname: variable name in which options are stored (defaults to "opt")
OUTPUT
opt: options structure in which data from defopt and OPTFILE are combined
SEE ALSO
ur_cleanupdisk
EXAMPLE
Specify this delete function:
Specify this initopt function:
function opt=initopt
defopt=struct('field1',100,'field2',200); %define default options
opt=ur_getopt(defopt,'options.stt'); %use saved options
function opt=deletef
try
opt=getdata(retrieve(gcf,'opt'))
save(opt.sttBackupName,'opt');
catch
end
ModelitUtilRoot\matlabguru\undoredocopy
01-Dec-2009 00:31:10
3001 bytes
structarray2table - convert array of structures to structure of arrays
CALL:
T = structarray2table(S, VERBOSE)
INPUT:
S(N): array of structures (structarray)
+----M1(1)
+----M2(1)
+----M3(1)
OUTPUT:
T(1): structure of arrays (tablestruct)
+----M1(N,1)
+----M2(N,1)
+----M3(N,1)
See also: table2structarray
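EXAMPLE (illustrative sketch, not part of the original help):
S = struct('M1',{1,2,3},'M2',{10,20,30});  %1x3 struct array
T = structarray2table(S);
% expected: T.M1 == [1;2;3] and T.M2 == [10;20;30]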
ModelitUtilRoot\table
17-Apr-2010 11:10:20
3580 bytes
tableRead - read a control file containing a table; the column format can
be specified
CALL:
T = tableRead(fname, fields, formats)
INPUT:
fname: name of the file to read
fields: cellstring with the names of the table columns
formats: format of each column, e.g. {'%s','[%f %f %f]'}
delimiter: string with the field separator, default ';'
OUTPUT:
T: the table that was read, empty if an error occurred
errmsg: string with the error message, if any
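EXAMPLE (illustrative sketch, not part of the original help text; the file
'stations.txt' and its column layout are hypothetical):
%suppose stations.txt contains lines such as:  EURPFM;[52.0 3.3 0.0]
T = tableRead('stations.txt',{'name','pos'},{'%s','[%f %f %f]'})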
ModelitUtilRoot\table
17-Mar-2010 10:24:38
3077 bytes
tableheight - get height (number of rows) of table
CALL:
N = tableheight(S)
INPUT:
S: <struct> a table structure
OUTPUT
N: <integer> height of table
See also: istable
ModelitUtilRoot\table
16-Jan-2007 17:07:16
388 bytes
tableselect - select data from struct of arrays (both rows and columns)
CALL:
T = tableselect(S,indx,flds)
T = tableselect(S,indx)
T = tableselect(S,flds)
INPUT:
S: struct of arrays(tablestruct), all fields must be (N x 1)
indx: index array or logical vector
flds: cell array
OUTPUT:
T: struct of arrays (tablestruct), all fields are (N x 1)
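EXAMPLE (illustrative sketch, not part of the original help text; the field
names are hypothetical):
S = struct('name',{{'a';'b';'c'}},'value',[1;2;3]); %tablestruct with 3 rows
T = tableselect(S,[1 3],{'value'})                  %rows 1 and 3, field 'value' only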
See also: table2structarray, tableunselect, structarrayselect
ModelitUtilRoot\table
06-Nov-2009 15:18:14
2538 bytes
serializeDOM - serialise a DOM by transformation to a string or file
CALL:
String = serializeDOM(DOM)
serializeDOM(DOM,fileName)
INPUT:
DOM: <java-object> org.apache.xerces.dom.DocumentImpl
fileName: <string> (optional) valid filename
OUTPUT:
String: <string> if nargin == 1 the serialised DOM
<boolean> if nargin == 2,
0 -> saving to fileName not successful
1 -> saving to fileName successful
EXAMPLE:
obj = xml %create an empty xml object
obj.date = now %add fields with values
obj.type = 'test'
save(obj,'test.xml')
obj('test.xml')
inspect(obj)
See also: xml, xml/inspect, xml/view, xml/save
Revisions
20100825 (ZIJPP): modified help info
ModelitUtilRoot\xml_toolbox
25-Aug-2010 12:27:56
2165 bytes
ModelitUtilRoot\xml_toolbox
29-Aug-2010 18:02:31
5162 bytes
addns - add a namespace definition to the xml-object
CALL:
obj = addns(obj,S)
INPUT:
obj: <xml-object>
S: <struct> fieldnames --> namespace variable
values --> namespace value
<cell array> nx2, first column --> namespace variable
second column --> namespace value
OUTPUT:
obj: <xml-object>
EXAMPLE
%create an xml-object
obj = xml(fullfile(pwd,'examples','namespaces.xml'))
%try to get attribute
obj.width.('@nsdim:dim')
%add namespace
addns(obj,{'nsdim','http://www.modelit.nl/dimension'})
%get attribute
obj.width.('@nsdim:dim')
See also: xml, xml/listns, xml/clearns, xml/removens, xml/getns
ModelitUtilRoot\xml_toolbox\@xml
08-Jun-2006 06:19:02
947 bytes
clearns - remove all the namespace definitions from the xml-object
CALL:
obj = clearns(obj)
INPUT:
obj: <xml-object>
OUTPUT:
obj: <xml-object> with no namespace definitions
EXAMPLE
%create an xml-object
obj = xml(fullfile(pwd,'examples','namespaces.xml'))
%add namespaces
addns(obj,{'ns','http://www.w3schools.com/furniture'})
addns(obj,{'nsdim','http://www.modelit.nl/dimension'})
%list namespaces
listns(obj)
%clear all defined namespaces
clearns(obj)
%list namespaces
listns(obj)
See also: xml, xml/listns, xml/addns, xml/removens, xml/getns
ModelitUtilRoot\xml_toolbox\@xml
08-Jun-2006 06:21:52
1002 bytes
display - display information about an xml-object on the console
CALL:
display(obj)
INPUT:
obj: <xml-object>
OUTPUT:
none, information about the xml-object is displayed on the console
EXAMPLE:
%create an xml from a sourcefile
obj = xml(fullfile(pwd,'examples','books.xml'))
%display function was automatically called by Matlab
See also: xml, display
ModelitUtilRoot\xml_toolbox\@xml
08-Jun-2006 06:34:26
1764 bytes
fieldNames - get the names of the direct children of the root node
c.f. the function fieldnames for structures
CALL:
fields = fieldnames(obj)
INPUT:
obj: <xml-object>
OUTPUT:
fields: <cellstring> with the nodenames of the children of the root node
EXAMPLE:
%create an xml from a sourcefile
obj = xml(fullfile(pwd,'examples','books.xml'))
fieldnames(obj)
book1 = obj.book(1)
fieldnames(book1{1})
See also: xml, xml/getRoot, xml/noNodes, xml/isfield
ModelitUtilRoot\xml_toolbox\@xml
26-Jun-2008 00:12:41
1527 bytes
get - get the value of the specified property for an xml-object (from the
object itself not from the xml)
CALL:
prop_val = get(obj,prop_name)
INPUT:
obj: <xml-object>
prop_name: <string> propertyname, possible values:
- DOM <org.apache.xerces.dom.DeferredDocumentImpl>
with the DOM representation of the xml
- file <string> with filename
- NS <java.util.HashMap> with namespaces
OUTPUT:
prop_val: the value of the specified property for the xml-object
<struct> with all properties plus values if nargin == 1
EXAMPLE:
%create an xml from a sourcefile
obj = xml(fullfile(pwd,'examples','books.xml'))
%get all property-value pairs
get(obj)
%get the (D)ocument (O)bject (M)odel
get(obj,'DOM')
See also: xml, xml/set
ModelitUtilRoot\xml_toolbox\@xml
01-Jun-2006 17:27:18
1547 bytes
getRoot - get the root node of an xml-object and its name
CALL:
[rootname root] = getRoot(obj)
INPUT:
obj: <xml-object>
OUTPUT:
rootname: <string> the name of the root node
root: <java object> org.apache.xerces.dom.DeferredElementNSImpl or
org.apache.xerces.dom.DeferredElementImpl
EXAMPLE:
%create an xml from a sourcefile
obj = xml(fullfile(pwd,'examples','books.xml'))
[rootname root] = getRoot(obj)
See also: xml, xml/noNodes
ModelitUtilRoot\xml_toolbox\@xml
08-Jun-2006 06:36:14
648 bytes
getns - retrieve a namespace definition from the xml-object
CALL:
S = getns(obj,key)
INPUT:
obj: <xml-object>
key: <string> with a namespace variable for which the definition has to
be retrieved
OUTPUT:
S: <string> with the namespace definition
EXAMPLE
%create an xml-object
obj = xml(fullfile(pwd,'examples','namespaces.xml'))
%add namespace
addns(obj,{'nsdim','http://www.modelit.nl/dimension'})
%get namespace
getns(obj,'nsdim')
See also: xml, xml/addns, xml/clearns, xml/removens, xml/listns
ModelitUtilRoot\xml_toolbox\@xml
08-Jun-2006 06:20:28
685 bytes
inspect - visualize the xml document as a tree in a separate window
CALL:
inspect(obj)
INPUT:
obj: <xml-object>
OUTPUT:
none, the DOM representation of the xml document appears as a tree
in a separate window
EXAMPLE:
%create an xml from a sourcefile
obj = xml(fullfile(pwd,'examples','books.xml'))
inspect(obj)
See also: xml, xml/view
ModelitUtilRoot\xml_toolbox\@xml
01-Oct-2009 15:24:15
2191 bytes
isempty - true if the xml-object has no fields
CALL:
tf = isempty(obj)
INPUT:
obj: <xml-object>
OUTPUT:
tf: <boolean> true if the DOM representation of the xml document does
not contain any nodes, or equivalently the xml-document
has no fields
EXAMPLE:
%create an empty xml-object
obj = xml
isempty(obj)
%add a field to the xml-object
obj.field = 'field'
isempty(obj)
%remove field from the xml-object
rmfield(obj,'field');
isempty(obj)
See also: xml, xml/noNodes, xml/fieldnames, xml/getRoot, xml/rmfield
ModelitUtilRoot\xml_toolbox\@xml
08-Jun-2006 06:53:38
790 bytes
isfield - true if at least one node satisfies the indexing 'sub'
CALL:
tf = isfield(obj,sub)
INPUT:
obj: <xml-object>
sub: <string> index into xml document (same format as indexing into
Matlab structures) e.g. 'book(1)' or 'book(1).title'
results in the same substructs as would be obtained if
S.book(1) or S.book(1).title were used (S a Matlab
structure)
<string> with xpath expression
OUTPUT:
tf: <boolean> true if at least one node satisfies the indexing 'sub'
EXAMPLE:
%create an xml from a sourcefile
obj = xml(fullfile(pwd,'examples','books.xml'))
isfield(obj,'book(1:2)')
isfield(obj,'book(2).@category')
%N.B. in the following statement true is returned although the number of
%books is 4; this is because book(2:4) exists
isfield(obj,'book(2:10)')
%examples with xpath expression
%are there any books in english?
isfield(obj,'bookstore/book/title[@lang=''en'']')
%are there any books in spanish?
isfield(obj,'bookstore/book/title[@lang=''es'']')
%are there books cheaper than 30 euro
isfield(obj,'bookstore/book[price < 30]')
See also: xml, xml/fieldNames, xml/rmfield
ModelitUtilRoot\xml_toolbox\@xml
02-Jun-2006 17:17:54
1618 bytes
listns - list the namespace definitions of the xml-object
CALL:
listns(obj)
INPUT:
obj: <xml-object>
OUTPUT:
no direct output, the defined namespaces are displayed on the console
EXAMPLE
%create an xml-object
obj = xml(fullfile(pwd,'examples','namespaces.xml'))
%no namespaces defined yet
listns(obj)
%add namespaces
addns(obj,{'ns','http://www.w3schools.com/furniture'})
addns(obj,{'nsdim','http://www.modelit.nl/dimension'})
%list namespaces
listns(obj)
See also: xml, xml/addns, xml/clearns, xml/removens, xml/getns
ModelitUtilRoot\xml_toolbox\@xml
08-Jun-2006 06:21:22
995 bytes
noNodes - get the total number of nodes present in the DOM-representation
of the xml document
CALL:
N = noNodes(obj)
INPUT:
obj: <xml-object>
OUTPUT:
N: <integer> with the total number of nodes in the DOM object
EXAMPLE:
%create an xml from a sourcefile
obj = xml(fullfile(pwd,'examples','books.xml'))
noNodes(obj)
See also: xml, xml/getRoot
ModelitUtilRoot\xml_toolbox\@xml
08-Jun-2006 06:14:12
585 bytes
removens - remove a namespace definition from the xml-object
CALL:
obj = removens(obj,S)
INPUT:
obj: <xml-object>
S: <char array> with names of the namespace definitions to be removed
<cell array> with names of the namespace definitions to be removed
OUTPUT:
obj: <xml-object>
EXAMPLE
%create an xml-object
obj = xml(fullfile(pwd,'examples','namespaces.xml'))
%add namespace
addns(obj,{'nsdim','http://www.modelit.nl/dimension'})
%get attribute
obj.width.('@nsdim:dim')
%remove namespace
removens(obj,{'nsdim'})
%try to get attribute
obj.width.('@nsdim:dim')
See also: xml, xml/listns, xml/clearns, xml/addns, xml/getns
ModelitUtilRoot\xml_toolbox\@xml
08-Jun-2006 06:26:44
896 bytes
rmfield - remove elements and attributes from an xml-object which satisfy
the indexing 'sub'
CALL:
rmfield(obj,sub)
INPUT:
obj: <xml-object>
sub: <string> index into xml document (same format as indexing into
Matlab structures) e.g. 'book(1)' or 'book(1).title'
results in the same substructs as would be obtained if
S.book(1) or S.book(1).title were used (S a Matlab
structure)
<string> with xpath expression
OUTPUT:
none, the xml-object is updated
EXAMPLE:
%create an xml from a sourcefile
obj = xml(fullfile(pwd,'examples','books.xml'))
rmfield(obj,'book(1:2)')
rmfield(obj,'book(2).@category')
inspect(obj)
%examples with xpath expression
obj = xml(fullfile(pwd,'examples','books.xml'))
%remove books cheaper than 30 euro
rmfield(obj,'bookstore/book[price < 30]')
inspect(obj)
obj = xml(fullfile(pwd,'examples','books.xml'))
%remove books in the category 'WEB'
rmfield(obj,'bookstore/book[@category = ''WEB'']')
inspect(obj)
See also: xml, xml/fieldNames, xml/isfield
ModelitUtilRoot\xml_toolbox\@xml
02-Jun-2006 17:18:00
1731 bytes
save - save the xml-object as an xml file
CALL:
obj = save(obj,fname)
INPUT:
obj: <xml-object>
fname: <string> (optional) the name of the xml file, if fname is not
specified a save dialog will pop up
OUTPUT:
obj: <xml-object> the file field of the xml-object is updated and an xml
file is created
EXAMPLE:
obj = xml %create an empty xml object
obj.date = datestr(now) %add fields with values
obj.description = 'test'
obj = save(obj,'test.xml') %save object by specifying filename
obj = xml('test.xml')
inspect(obj);
See also: xml, xml/view, xml/inspect
ModelitUtilRoot\xml_toolbox\@xml
06-Jun-2006 07:00:26
1336 bytes
selectNodes - select nodes from the XML DOM tree
CALL:
nodesList = selectNodes(obj,ind)
INPUT:
obj: <xml-object>
ind: <struct array> with fields
- type: one of '.' or '()'
- subs: subscript values (field name or cell array
of index vectors)
<string> with an xpath expression
OUTPUT:
nodesList: <java object> java.util.ArrayList with tree nodes
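EXAMPLE (illustrative sketch, not part of the original help text; it reuses
the books.xml example file and an xpath expression in the style of xml/isfield):
obj = xml(fullfile(pwd,'examples','books.xml'));
nodesList = selectNodes(obj,'bookstore/book/title')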
See also: xml, xml/xpath, xml/subsref, xml/subsasgn,
xml/private/buildXpath
ModelitUtilRoot\xml_toolbox\@xml
08-Jun-2006 07:17:48
968 bytes
set - set the value of the specified property for an xml-object
CALL:
set(obj,prop_name,prop_value)
INPUT:
obj: <xml-object>
prop_name: <string> propertyname, possible values:
- DOM <org.apache.xerces.dom.DeferredDocumentImpl>
with the DOM representation of the xml
- file <string> with filename
- NS <java.util.HashMap> with namespaces
prop_value: the value of the property to be set for the xml-object
OUTPUT:
obj: <xml-object> with the property prop_name set to prop_value
EXAMPLE:
%create an xml from a sourcefile
obj = xml(fullfile(pwd,'examples','books.xml'))
%get all property-value pairs
get(obj,'file')
%get the (D)ocument (O)bject (M)odel
obj = set(obj,'file',fullfile(pwd,'examples','books_changed.xml'))
get(obj,'file')
See also: xml, xml/get
ModelitUtilRoot\xml_toolbox\@xml
01-Jun-2006 17:28:20
2025 bytes
storeStructure - store contents of structure in xml object
CALL:
obj = storeStructure(obj,S)
INPUT:
S: <struct> or <struct array>
OUTPUT:
obj: <xml-object>
EXAMPLE:
obj=xml
obj = storeStructure(obj,S)
inspect(obj)
NOTES
- This function is called by the xml object constructor, in the case
where the constructor is called with a structure as its input argument.
- Although alternative uses of this method may be possible, they have
not yet been tested
See also: xml
ModelitUtilRoot\xml_toolbox\@xml
29-Aug-2010 17:49:10
3316 bytes
subsasgn - assign new values to the xml document in an xml-object
CALL:
obj = subsasgn(obj,ind,data)
INPUT:
obj: <xml-object>
ind: <struct array> with fields
- type: one of '.' or '()'
- subs: subscript values (field name or cell array
of index vectors)
<string> with an xpath expression
data: (optional) the values to be assigned to the fields of the
xml-object selected by ind, allowed types:
- <struct> matlab structure
- <xml-object>
- <org.apache.xerces.dom.ElementImpl>
OUTPUT:
obj: <xml-object>
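EXAMPLE (illustrative sketch, not part of the original help text; it mirrors
the assignment style used in the xml/save example):
obj = xml;                           %create an empty xml object
ind = substruct('.','author');       %index the field 'author'
obj = subsasgn(obj,ind,'Bertsekas')  %equivalent to obj.author = 'Bertsekas'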
See also: xml, xml/subsref, xml/xpath, subsasgn
ModelitUtilRoot\xml_toolbox\@xml
08-Jun-2006 07:14:00
960 bytes
subsref - subscripted reference for an xml object
CALL:
S = subsref(obj,ind)
INPUT:
obj: <xml-object>
ind: <struct array> with fields
- type: one of '.' or '()'
- subs: subscript values (field name or cell array
of index vectors)
<string> with an xpath expression
OUTPUT:
S: <cell array> with contents of the referenced nodes, can contain
xml objects, strings or numbers
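EXAMPLE (illustrative sketch, not part of the original help text; an xpath
string is used as index, as described under INPUT):
obj = xml(fullfile(pwd,'examples','books.xml'));
S = subsref(obj,'bookstore/book/title')  %cell array with the title node contents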
See also: xml, xml/subsasgn, xml/xpath, subsref
ModelitUtilRoot\xml_toolbox\@xml
08-Jun-2006 07:13:44
665 bytes
view - convert the xml-object into a string
Note: This method has been superseded by the method "xml2str";
the method view is kept for backward compatibility but will become
obsolete in the future.
CALL:
view(obj)
S=view(obj)
INPUT:
obj: <xml-object>
OUTPUT:
S: <string> with the xml-document
EXAMPLE:
%create an xml from a sourcefile
obj = xml(fullfile(pwd,'examples','books.xml'))
view(obj)
See also: xml, xml/save, xml/inspect
ModelitUtilRoot\xml_toolbox\@xml
25-Aug-2010 16:35:30
733 bytes
xml - constructor for an xml-object
CALL:
obj = xml(FileName,isNameSpaceAware,isValidating)
INPUT:
FileName: <string> name of the sourcefile
<string> the xml string
<java-object> with a D(ocument) (O)bject (M)odel
<struct> a Matlab structure
isNameSpaceAware: <boolean> (optional) (default == 1) ignore namespaces
isValidating: <boolean> (optional) (default == 0) validate document
OUTPUT:
obj: <xml-object> with fields:
- DOM: <java object> the DOM object
- file: <string> the name of the xml source
- NS: <java object> a hashmap with namespace
definitions
N.B. obj is empty when an error occurred
EXAMPLE:
%create an xml from a sourcefile
obj = xml(fullfile(pwd,'examples','books.xml'))
inspect(obj)
%create an xml from a sourcefile
obj = xml(java.io.File(fullfile(pwd,'examples','books.xml')))
inspect(obj)
%create an xml from a Matlab structure
obj = xml(dir)
inspect(obj)
%create an xml directly from a string
str = '<book category="MATHS"><title lang="en">Nonlinear Programming</title><author>Dimitri P. Bertsekas</author></book>'
obj = xml(str)
inspect(obj)
%create an xml directly from an inputstream
obj = xml(java.io.FileInputStream((fullfile(pwd,'examples','books.xml'))))
inspect(obj)
%create an xml from a sourcefile and validate against a dtd (specified
%in the xml itself)
obj = xml(fullfile(pwd,'examples','note_dtd.xml'),0,1)
%create an xml from a sourcefile and validate against an xsd (specified
%in the xml itself)
obj = xml(fullfile(pwd,'examples','note_xsd.xml'),1,1)
See also: xml/view, xml/inspect
ModelitUtilRoot\xml_toolbox\@xml
29-Aug-2010 16:33:39
8556 bytes
xml2str - convert the xml-object into a string
Note: This method replaces the method "view"
the method view is kept for backward compatibility
CALL:
xml2str(obj)
S=xml2str(obj)
INPUT:
obj: <xml-object>
OUTPUT:
S: <string> with the xml-document
EXAMPLE:
%create an xml from a sourcefile
obj = xml(fullfile(pwd,'examples','books.xml'))
xml2str(obj)
See also: xml, xml/save, xml/inspect
ModelitUtilRoot\xml_toolbox\@xml
01-Oct-2009 10:40:00
680 bytes
xml2struct - transform xml object to Matlab structure if contents of XML
permit this
CALL
s=xml2struct(obj)
s=xml2struct(obj,NOCELL)
INPUT
obj: XML object
NOCELL: if true, do NOT store data in cells. Defaults to false
OUTPUT
corresponding Matlab structure
NOTE
Not all XML documents can be represented as a Matlab structure; if the
XML contents do not fit, the following error results:
"XML contents do not fit in Matlab structure"
ModelitUtilRoot\xml_toolbox\@xml
03-Jan-2009 13:42:07
3593 bytes
xml - constructor for an xml-object
CALL:
obj = xml(FileName,isNameSpaceAware,isValidating)
INPUT:
FileName: <string> name of the sourcefile
<string> the xml string
<java-object> with a D(ocument) (O)bject (M)odel
<struct> a Matlab structure
isNameSpaceAware: <boolean> (optional) (default == 1) ignore namespaces
isValidating: <boolean> (optional) (default == 0) validate document
OUTPUT:
obj: <xml-object> with fields:
- DOM: <java object> the DOM object
- file: <string> the name of the xml source
- NS: <java object> a hashmap with namespace
definitions
N.B. obj is empty when an error occurred
EXAMPLE:
%create an xml from a sourcefile
obj = xml(fullfile(pwd,'examples','books.xml'))
inspect(obj)
%create an xml from a sourcefile
obj = xml(java.io.File(fullfile(pwd,'examples','books.xml')))
inspect(obj)
%create an xml from a Matlab structure
obj = xml(dir)
inspect(obj)
%create an xml directly from a string
str = '<book category="MATHS"><title lang="en">Nonlinear Programming</title><author>Dimitri P. Bertsekas</author></book>'
obj = xml(str)
inspect(obj)
%create an xml directly from an inputstream
obj = xml(java.io.FileInputStream((fullfile(pwd,'examples','books.xml'))))
inspect(obj)
%create an xml from a sourcefile and validate against a dtd (specified
%in the xml itself)
obj = xml(fullfile(pwd,'examples','note_dtd.xml'),0,1)
%create an xml from a sourcefile and validate against an xsd (specified
%in the xml itself)
obj = xml(fullfile(pwd,'examples','note_xsd.xml'),1,1)
See also: xml/view, xml/inspect
ModelitUtilRoot\xml_toolbox\@xml
19-Dec-2008 15:26:59
7454 bytes
xpath - carry out a set or get for an xml-object using xpath syntax
CALL:
S = xpath(obj,ind)
S = xpath(obj,ind,data)
INPUT:
obj: <xml-object>
ind: <struct array> with fields
- type: one of '.' or '()'
- subs: subscript values (field name or cell array
of index vectors)
<string> with an xpath expression
data: (optional) the values to be assigned to the fields of the
xml-object selected by ind, allowed types:
- <struct> matlab structure
- <xml-object>
- <org.apache.xerces.dom.ElementImpl>
OUTPUT:
S: <cell array> if nargin == 2 (get is used)
<xml-object> if nargin == 3 (set is used)
EXAMPLE:
%create an xml from a sourcefile
obj = xml(fullfile(pwd,'examples','books.xml'))
%select the book with title 'Harry Potter'
book = xpath(obj,'parent::/bookstore/book[title="Harry Potter"]')
inspect(book{1})
See also: xml, xml/set, xml/get, xml/subsref, xml/subsasgn,
xml/private/buildXpath
ModelitUtilRoot\xml_toolbox\@xml
26-Jun-2008 10:45:40
9923 bytes
xslt - transform the xml-object to html by using a stylesheet
CALL:
HTMLstring = xslt(obj,xsl,fileName)
INPUT:
obj: <xml-object>
xsl: <string> filename of the stylesheet
fileName: <string> (optional) the name of the file to which the HTML has
to be saved
OUTPUT:
HTMLstring: <string> to HTML transformed XML string
EXAMPLE:
%create an xml from a sourcefile
obj = xml(fullfile(pwd,'examples','cd_catalog.xml'))
HTMLstring = xslt(obj,fullfile(pwd,'examples','cd_catalog.xsl'))
%display in browser
web(['text://' HTMLstring]);
See also: xml, xml/save, web, xslt
ModelitUtilRoot\xml_toolbox\@xml
13-Jun-2006 14:18:28
1705 bytes
buildXPath - create an XPath object for an XML DOMtree
CALL:
x = buildXPath(string,nsStruct)
INPUT:
string: <string> XPath expression
Namespaces: <java object> (optional) a java.util.HashMap with namespace
definitions
OUTPUT:
x: <java object> org.jaxen.dom.DOMXPath
See also: xml, xml/xpath, xml/subsasgn, xml/subsref
ModelitUtilRoot\xml_toolbox\@xml\private
08-Jun-2006 08:00:24
1223 bytes
chararray2char - convert char array to string
CALL:
str = chararray2char(str)
INPUT:
str: <char array>
OUTPUT:
str: <string>
See also: xml, xml/xpath, xml/subsasgn, xml/private/toString
ModelitUtilRoot\xml_toolbox\@xml\private
08-Jun-2006 08:50:54
409 bytes
emptyDocument - create an empty Document Object Model (DOM) (only the
root node is present)
CALL:
document = emptyDocument(root)
INPUT:
root: (optional) <string> name of the root node
<org.apache.xerces.dom.ElementImpl>
<org.apache.xerces.dom.ElementNSImpl>
default == 'root'
OUTPUT:
document: <java-object> org.apache.xerces.dom.DocumentImpl
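EXAMPLE (illustrative sketch, not part of the original help text; this is a
private function, normally called from within the @xml class):
document = emptyDocument('bookstore') %DOM containing only the root node <bookstore>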
See also: xml, xml/view, xml/inspect
ModelitUtilRoot\xml_toolbox\@xml\private
02-Jun-2006 08:10:50
1154 bytes
fieldInfo - determine the number and names of the direct children of the
root node and the name of the root node
CALL:
S = fieldInfo(obj)
INPUT:
obj: <xml-object>
OUTPUT:
S: <struct> with fields
- root : <string> name of the root node
- children : <struct> with fields
- name: <string> names of the direct children of
the root node
- frequency: <int> number of times a certain node
appears
See also: xml, xml/display
ModelitUtilRoot\xml_toolbox\@xml\private
08-Jun-2006 08:36:56
1322 bytes
ind2xpath - convert a matlab substruct into an xpath string
CALL:
xpathstr = ind2xpath(ind)
INPUT:
ind: <struct> see substruct
OUTPUT:
xpathstr: <string> xpath equivalent of the substruct
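EXAMPLE (illustrative sketch, not part of the original help text; the exact
form of the returned xpath string is an assumption):
ind = substruct('.','book','()',{2});
xpathstr = ind2xpath(ind)   %e.g. something like 'book[2]'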
See also: xml, xml/private/buildXpath, xml/subsasgn, xml/subsref,
xml/xpath, substruct
ModelitUtilRoot\xml_toolbox\@xml\private
08-Jun-2006 08:28:44
1042 bytes
struct2hash - convert a matlab structure into a java hashmap
CALL:
H = struct2hash(S,H)
INPUT:
S: <struct> fieldnames --> hashmap keys
values --> hashmap entries
H: <java object> (optional) java.util.HashMap
OUTPUT:
H: <java object> java.util.HashMap
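EXAMPLE (illustrative sketch, not part of the original help text; this is a
private function, normally called from within the @xml class):
S = struct('ns','http://www.w3schools.com/furniture');
H = struct2hash(S)   %java.util.HashMap with key 'ns'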
See also: xml, xml/private/buildXpath
ModelitUtilRoot\xml_toolbox\@xml\private
08-Jun-2006 08:28:36
577 bytes
sub2ind - convert a string into a struct array of type substruct, for
indexing into xml documents as if they were Matlab structures
CALL:
ind = sub2ind(S)
INPUT:
S: <string> index into xml document (same format as indexing into
Matlab structures) e.g. 'book(1)' or 'book(1).title' result
in the same substructs as would be obtained if S.book(1) or
S.book(1).title were used (S a Matlab structure).
OUTPUT:
ind: <struct array> with fields:
- type -> subscript types '.', '()', or '{}'
- subs -> actual subscript values (field names or
cell arrays of index vectors)
EXAMPLE:
ind = sub2ind('book(1)')
See also: xml, xml/isfield, xml/rmfield, substruct
ModelitUtilRoot\xml_toolbox\@xml\private
08-Jun-2006 08:29:08
1223 bytes
toString - convert java object, cellstring or char array to string
CALL:
S = toString(S)
INPUT:
S: <cell string>
<char array>
<java object>
OUTPUT:
S: <string>
See also: xml, xml/xpath, xml/subsasgn
ModelitUtilRoot\xml_toolbox\@xml\private
08-Jun-2006 08:44:10
653 bytes
ChainRule - kettingregel voor differentieren
CALL:
transform = ChainRule(transform1,transform2)
INPUT:
transform1: <struct>
- type
- linear: tijdsinvariant lineaire transformatie: y=A*x
- diagonal: tijdsafhankelijke transformatie, alleen
gradient van de transformatie gespecificeerd
aantal inputs is gelijk aan aantal output,
transformatie matrix is diagonaal
- full: tijdsafhankelijke transformatie, alle
argumenten gespecificeerd
- M: transformatie matrix. vorm hangt van type af
- Ninput: aantal inputs hoog
- Noutput: aantal outputs hoog
transform2: <struct> zie transform1
OUTPUT:
transform: <struct> zie transform1
ApplicationRoot\wavixIV\CONHOP
03-Nov-2006 10:52:52
8619 bytes
EstimateConhop3 - Schat de reeksen van de hoofdsensoren bij m.b.v. de
Conhop operator, callback van het databeheer scherm
CALL:
db = EstimateConhop3(obj, event, opt, db, IDsPredict)
INPUT:
obj: <handle> van de 'calling' uicontrol
event: leeg, standaard argument van een callback
opt: <struct> berekeningsopties
- improveInit: (1/0) verbeter initiele schatting
- fastRepair: (1/0) toepassen fast repair
- fastRepairVal: (N ) voer N iteraties uit
- optim: (1/0) pas conhop optimalisatie toe
- estimateNeven (1/0) schatten op basis van Neven sensoren
- estimateall (1/0) schatten alles
db: <struct> de centrale database
IDsPredict: <vector> (optioneel) met indices van de te voorspellen
reeksen, default is alles bijschatten
OUTPUT:
db: <struct> de centrale database, met de geschatte
hoofdsensoren in de V-velden
ApplicationRoot\wavixIV\CONHOP
10-Mar-2009 20:00:56
37963 bytes
NN_depend - bepaal welke reeksen nodig zijn om de neurale netwerken door
te kunnen rekenen
CALL:
[IDsDepend,IDsMissing,doubleIDS,WkeyMissing,IDs2NN_indx] = NN_depend(db,NN_name,IDs)
INPUT:
db: <struct> de centrale database
NN_name: <string> met de te gebruiken netwerken
- 'netwerk' voor hoofdnetwerken
- 'convnetwerk' voor conversienetwerken
IDs: <vector> met indices van de bij te schatten reeksen
OUTPUT:
IDsDepend: <vector> met de IDs van de benodigde invoer reeksen
(voor zover gevonden)
IDsMissing: <vector> met de IDs van de te voorspellen reeksen waarbij
geen neuraal netwerk gevonden kon worden
doubleIDS: <struct> met de sleutels behorend bij IDs die op meer dan 1
manier berekend kunnen worden. met velden
- sLoccod
- sParcod
- sVatcod
WkeyMissing: <struct> met de sleutels van benodigde invoerreeksen die
niet aanwezig zijn in het werkgebied. met velden
- sLoccod
- sParcod
- sVatcod
IDs2NN_indx: <vector> met de de indices van de te gebruiken netwerken
die corresporen met IDs (0 op de plaatsen van niet gevonden
reeksen)
See also: selectPredictable
ApplicationRoot\wavixIV\CONHOP
29-Nov-2006 11:13:26
3946 bytes
SimulNN - simultane toepassing neurale netwerken
CALL:
[W_est,stdW_est,JacW] = SimulNN(W,stdW,f_required,f_3,NN_data)
INPUT:
W: <matrix> met data (aantal perioden bij aantal reeksen)
stdW: <matrix> met stdafw (zelfde grootte als W)
f_required: <lineaire index> de elementen die herberekend moeten worden
f_3: <index> de gevraagde KOLOMMEN uit de Jacobiaan, dat zijn
de vrij te varieren variabelen
NN_data: <struct> read only met gegevens voor de neurale netwerken
met velden:
- Wkey: uit M.Wkey
- NetworkStructObj: een netwerkobject
- SensorIndx: de hoofdsensoren
- diatijd: uit M.diatijd
- DiaIndx: uit M.DiaIndx
OUTPUT:
W_est: <vector> met geschatte waarden
(hoogte=lengte(f_required))
stdW : <vector> met bijbehorende standaard afwijkingen
(hoogte=lengte(f_required))
JacW: <matrix> met gedeelte van de Jacobiaan
(hoogte=lengte(f_required) breedte=lengte(f_3))
ApplicationRoot\wavixIV\CONHOP
06-Aug-2007 17:21:12
15768 bytes
SimulateNeuralNetwork2 - Simuleer het neurale netwerk in NetworkStruct
CALL:
[output,sigma,message,jaco] = SimulateNeuralNetwork2(W,stdW,Wkey,NetworkStruct,periodeIndex)
INPUT:
W: <matrix> aantal periodes bij aantal benodigde reeksen
voor de neurale netwerken, met geobserveerde waarden
stdW: <matrix> aantal periodes bij aantal benodigde reeksen
voor de neurale netwerken, met betrouwbaarheden
Wkey <struct> bijbehorende sleutels (ZIE sleutel2struct)
lengte is aantal benodigde reeksen, met velden:
- sLoccod: <string>
- sParcod: <string>
- sVatcod: <string>
NetworkStruct <struct> structure met o.a. een veld netwerk
met een netwerk structuur
periodeIndex <vector> (optioneel) index van de te rijen (periodes),
wordt gebruikt met conhop
OUTPUT:
output <vector> van lengte periodeIndex met voorspelde waarden
sigma <vector> van lengte periodeIndex met voorspelde standaarddeviaties
message <string> met eventuele boodschap (nog niet gebruikt)
jaco <sparse matrix> hoogte=#periodeIndex;
breedte=prod(size(W)), de Jacobiaan
ApplicationRoot\wavixIV\CONHOP
20-Mar-2007 21:35:48
12903 bytes
TestVars - test of de variabelen in NetworkStruct.data aanwezig zijn in
de database
CALL:
result = TestVars(M,NetworkStruct,mode)
INPUT:
M: <struct> de kopie van de centrale database (via db2mat)
NetworkStruct: <struct> met relevante velden
- NetworkStruct.data.invoer en
- NetworkStruct.data.uitvoer
mode: <string> mogelijke waarden
- 'training'
- 'simulation'
OUTPUT:
result: <cellstring> met de namen van de elementen die niet
in de database zitten, leeg als alles ok
ApplicationRoot\wavixIV\CONHOP
24-Oct-2006 17:33:04
3791 bytes
conhopobjfun2 - de doelfunctie voor de Consistency Measure
CALL:
[f,g,H] = conhopobjfun2(x,funpars)
INPUT:
x : de te varieren variabele
funpars : cell array met parameters, in de volgende volgorde:
M : een kopie van het werkgebied
SensorIndx : SensorIndx(i) hoort bij het netwerk met index i
* geeft aan welke kolom in matrix M voorspeld wordt
* correspondeert met NetworkStructObj
NetworkStruct : struct array met neurale netwerken
I_hiaat : Lineaire index naar de hiaten (De index van x!!)
I_affected : Indices van de elementen die door I_hiaat worden
beinvloed. Cell-array correspondeert met NetworkStruct
I_jacaffected : geeft aan welke hiaten verantwoordelijk
zijn voor beinvloeding, correspondeert met I_affected, uit
I_affected kan de reeks en tijdstip gehaald worden waarop
I_jacaffected van toepassing is
OUTPUT
x : <vector> punt waarop de doelfunctie geevalueerd moet worden
funpars: <cell array> met inhoud:
W : vector met waarnemingen
stdW : vector met waarnemingsfouten.
Let op!! de elementen f_3 zijn hierin al op nul gezet.
NN_data: de neurale netwerk gegevens
f_3 : indices van vrij te varieren waarden (lineaire index in W)==I_wederzijds
f_4 : indices van door f_3 beinvloede waarden (lineaire index in W)
E4tE3 : Het resultaat van een vermenigvuldiging van E4'*E3
E4tE1W : Het resultaat van een vermenigvuldiging van E4'*E1*W
OUTPUT:
f: <double> de doelfunctiewaarde, scalair
g: <vector> de gradient, length(x) lang
H: <matrix> de hessiaan (Jac'*Jac), length(x) bij length(x)
ApplicationRoot\wavixIV\CONHOP
30-Sep-2005 17:56:56
6455 bytes
dampnewton - Levenberg-Marquardt type damped Newton method for nonlinear
optimization
CALL:
[running_time,x,f,g,H] = dampnewton(fun,par,x,options)
INPUT:
fun: <function handle> must be defined as:
[f,g,H] = fun(x,par)
par: optional parameters for fun, may be empty
x0: <vector> starting point
options: <struct> with the following fields:
- mu : <double> initial value for the Marquardt
parameter.
- epsilon1 : <double> ||g||_inf <= epsilon1
- epsilon2 : <double> ||dx||2 <= epsilon2*(epsilon2 + ||x||2)
- maxiter : <int> maximum number of iterations
OUTPUT:
x: <vector> optimal values of the variables
f: <double> function value
g: <vector> gradient
H: <matrix> length(x) by length(x) Hessian
APPROACH:
- Section 5.2 in P.E. Frandsen, K. Jonasson, H.B. Nielsen,
O. Tingleff: "Unconstrained Optimization", IMM, DTU. 1999.
- "damping parameter in marquardt's method"
Hans Bruun Nielsen, IMM, DTU. 99.08.10 / 08.12
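EXAMPLE (illustrative sketch, not part of the original help text; quadfun is
a hypothetical objective used only to show the required [f,g,H] interface):
function [f,g,H] = quadfun(x,par)
f = sum((x-par).^2);   %objective value
g = 2*(x-par);         %gradient
H = 2*eye(numel(x));   %Hessian
Then call, for example:
options = struct('mu',1e-4,'epsilon1',1e-8,'epsilon2',1e-10,'maxiter',50);
[running_time,x,f,g,H] = dampnewton(@quadfun,[1;2],[0;0],options)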
ApplicationRoot\wavixIV\CONHOP
13-Feb-2009 13:52:22
12476 bytes
dispdump - display messages and store them for reporting
CALL:
strs = dispdump(strs,str)
INPUT:
strs: <cellstr> with the messages collected so far
str: <string> with the message to add
OUTPUT:
strs: <cellstr> updated cellstring with messages
ApplicationRoot\wavixIV\CONHOP
23-Dec-2004 04:56:58
368 bytes
matgetvar2 - genereer de reeks(en) (W en stdW) voor een opgegeven locatie
variabele veldapparaat tijdstip(verschuivingen) combinatie
vanuit de matrix die gemaakt is met db2mat
CALL:
[W_sel,stdW_sel,index,stdW_seq,T1,T2] = matgetvar2(W,stdW,Wkey,WTkey,periodeIndex)
INPUT:
W: <matrix> met meetdata
stdW: <matrix> met meetfouten
Wkey: <struct> bijbehorende sleutels (ZIE sleutel2struct)
lengte is aantal benodigde reeksen, met velden:
- sLoccod: <string>
- sParcod: <string>
- sVatcod: <string>
WTkey: <struct> sleutels van op te halen reeksen
(ZIE parseNNInvoer) met velden
- sLoccod: char str
- sParcod: char str
- sVatcod: char str
- tShift: integer
periodeIndex: <vector> met de indices van de te berekenen periodes
OUTPUT:
W_sel: <array> de waarden voor de loc var veldapp tijd combinatie
stdW_sel: <array> de deviaties voor de loc var veldapp tijd combinatie
index: <int> de index van de dia die hoort bij de loc var veldapp
tijd combinatie
stdW_seq: <matrix> standaarddeviaties met de richtingen NIET ontbonden
T1: <struct> lineaire transformatiematrix
T2: <struct> diagonale transformatie
METHODE:
als var == 'WINDRTG', 'Th0' of 'Th3' dan wordt var opgesplitst in een
x- en y-richting
Toegepaste transformaties:
T1: blaas alle richting reeksen op tot 2 identieke exemplaren (lineaire transformatie)
T2: laat reeksen ongemoeid of neem sinus en of cosinus (diagonale transformatie)
ApplicationRoot\wavixIV\CONHOP
20-Mar-2007 21:35:30
5451 bytes
selectPredictable - filter IDsPredict van reeksen die mogen worden
bijgeschat
CALL:
[IDsPredict,report] = selectPredictable(db,NN_name,IDsPredict)
INPUT:
db: <struct> de centrale database
NN_name: <string> naam van het struct array dat de NN herbergt
IDsPredict: <vector> indices van de te voorspellen reeksen
OUTPUT:
IDsPredict: <vector> indices van de reeksen waarvoor geldt:
- Alleen reeksen waarvoor een NN aanwezig is worden
voorspeld
- Alleen Neurale netwerken waarvoor alle invoer reeksen
aanwezig zijn mogen worden gebruikt.
report: <string> bijdrage aan logboek
See also: NN_depend
ApplicationRoot\wavixIV\CONHOP
29-Nov-2006 11:12:02
3912 bytes
simstructnet - simuleer met een netwerk in structuurformaat
CALL:
[result,T] = simstructnet2(netstruct,inputdata)
INPUT:
netstruct - <struct> zie emptystruct('netwerk')
inputdata(M,P) - <matrix> aantal inputs bij aantal beschikbare patronen
OUTPUT:
result(H,P) - <matrix> (aantal outputs maal aantal patronen) bij
aantal members
T(MxH,P) - <matrix> met invloed van invoer(I) op uitvoer(U)
(I1->U1,I2->U1,... I1->U2,I2->U2,...etc)
(doorgaans is er maar 1 output en geldt H==1)
METHODE:
deze functie is in principe gelijk aan sim van de neural network
toolbox, met het verschil dat
1) alleen het resultaat van de simulatie wordt teruggegeven
2) er gewerkt wordt met een structuur en niet met een netwerk object
alle gegevens zijn te vinden in
netstruct.ensemble.member: met de bias en gewichten
netstruct.netwerk: met de netwerkstructuur: aantalneuronen
transferfuncties aantal lagen etc.
3) deze routine werkt alleen voor feedforward netwerken
ApplicationRoot\wavixIV\CONHOP
30-Oct-2006 12:58:28
12911 bytes
start_conhop - GUI voor opstarten van conhop
CALL:
opt = start_conhop
INPUT:
geen invoer
OUTPUT:
opt: <struct> met de opties voor het uitvoeren van de conhop
schattingen: met velden
- improveInit: initiele schatting m.b.v. neurale netwerken
toepassen
- fastRepair: <0 of 1> iteratief bijschatten wederzijdse
hiaten
- fastRepairVal: <int> aantal keer uitvoeren fastrepair
- optim: <0 of 1> optimaliseren wederzijdse hiaten
- dampnewton: <struct> met de opties voor de optimalisatie
- mu: <double> startwaarde voor de trustregion
parameter
- epsilon1: <double> convergentieparameter voor de
gradient
- epsilon2: <double> convergentieparameter voor de
stapgrootte
- maxiter: <int> maximum aantal iteraties
See also: Estimate
ApplicationRoot\wavixIV\CONHOP
13-Feb-2009 13:52:52
13747 bytes
RemoveDiablok - verwijder reeksen uit het werkgebied
CALL:
RemoveDiablok(rmvIDs,db,C,msg)
INPUT:
rmvIDs : <vector> met indices van de te verwijderen reeksen
db: <struct> de centrale database
C: <struct> met constantes
msg: <string> tekst voor het logboek
OUTPUT:
geen directe uitvoer, de database wordt aangepast
ApplicationRoot\wavixIV\DATABEHEER
26-Oct-2006 18:59:56
1359 bytes
SelectLocation - gui voor het selecteren van een reeks als hoofdsensor
bij een andere locatie
CALL:
locindx = SelectLocation(C,loc,dia)
INPUT:
C: <struct> met constantes
loc: <struct> het db.loc veld van de centrale database
dia: <struct> de reeks die gekoppeld moet worden aan een andere
locatie
OUTPUT:
locindx: <int> index in het db.loc.sLoccod veld van de centrale
database
ApplicationRoot\wavixIV\DATABEHEER
15-Oct-2008 12:43:22
4216 bytes
WavixDia2Blok - converteer Wavix element "dia" naar Donar "blok" element
CALL:
blok = WavixDia2Blok(dia)
INPUT:
dia: <struct array> met dia's
OUTPUT:
blok: <struct> van blokken die in een dia kunnen worden weggeschreven
METHODE
Deze functie wordt aangeroepen in de volgende situaties:
- wanneer twee Dia's worden samengevoegd. De wavix datastructuur
wordt dan tijdelijk omgezet in een Donar structuur. Hierdoor wordt
het mogelijk om de algemene utility "dia_merge" te benutten.
(procedure do_import_dia.m)
- wanneer een reeks wordt geconverteerd. Dit gebeurt door de een
nieuwe reeks aan te maken en te importeren met do_import_dia
- alle overige plaatsen waar do_import_dia vanuit Wavix wordt
aangeroepen.
ApplicationRoot\wavixIV\DATABEHEER
28-Jan-2007 22:49:42
1252 bytes
check_Hm0 - voer een consistentie check uit: vergelijk hiaten in reeks
met hiaten in corresponderende Hm0 reeksen
CALL:
[warnmsg,db] = check_Hm0(db)
INPUT:
db: <undoredo object> de centrale database
OUTPUT:
warnmsg: <string> met een eventuele waarschuwing, '' als alles ok
db: <struct> de centrale database waarin de reeksen met hiaten
in de corresponderende Hm0 reeks op hiaat gezet zijn
ApplicationRoot\wavixIV\DATABEHEER
15-Oct-2008 12:46:54
3266 bytes
check_Hm0_1 - hiaatstatus aanpassen voor 1 reeks
CALL:
[db, reportstr, missingHm0] = check_Hm0_1(db, C, indx, W3Hs, msg)
INPUT:
db: undoredo object met de centrale Wavix database
C: structure met constantes
indx: index van de te controleren dia
W3Hs: struct array met alle W3H blokken uit het wgb
msg: string voor item in undoredo lijst
OUTPUT:
db: undoredo object met de centrale Wavix database
reportstr: string met eventuele waarschuwingen over ontbreken Hm0 of
aantal hiaten in Hm0
missingHm0: string met stations waarvoor Hm0 mist
ApplicationRoot\wavixIV\DATABEHEER
18-Oct-2007 18:44:40
3241 bytes
cmp_stdafw - bereken de standaardafwijking van alle reeksen in het
werkgebied
CALL:
[msg,db] = cmp_stdafw(db)
INPUT:
db: <struct> de centrale database
OUTPUT:
msg <string>
db: <struct> de centrale database met het veld stdV gevuld voor alle
aanwezige reeksen
See also: ComputeStd
ApplicationRoot\wavixIV\DATABEHEER
15-Oct-2008 12:48:52
1077 bytes
databeheer - installeer de databeheer GUI
CALL:
databeheer(obj,event)
INPUT:
obj: <handle> van de 'calling' uicontrol, (wordt niet gebruikt)
event: leeg, standaard argument van een callback (wordt niet gebruikt)
OUTPUT:
geen directe uitvoer, het databeheer scherm wordt geopend
APPROACH:
Deze functie kijkt of het databeheer scherm al is geinstalleerd en
maakt het in dat geval current.
Zo niet, dan wordt het databeheer scherm geinitialiseerd.
Deze functie module bevat alle define- functies waarmee het scherm
wordt opgebouwd, en de meeste van de callback functies die vanuit het
scherm kunnen worden aangeroepen.
See also: dbhview
ApplicationRoot\wavixIV\DATABEHEER
15-Oct-2008 13:08:58
34920 bytes
databeheerview - view functie voor het databeheer scherm
CALL:
databeheerview(udnew,opt,upd,C,HWIN)
INPUT:
udnew: <struct> de centrale database
opt: <struct> GUI settings voor databeheer
upd: <struct> de te updaten scherm elementen
C: <struct> de wavix constantes
HWIN: <handle> van het databeheer scherm
OUTPUT:
geen directe output, het databeheer scherm is geupdate
See also:
databeheer
data2dbh - definieer afhankelijkheid databeheer scherm
van data
settings2dbh - definieer afhankelijkheid databeheer scherm
van settings (opties)
ApplicationRoot\wavixIV\DATABEHEER
15-Oct-2008 12:48:32
7143 bytes
dealwithdiablok - verwijder, selecteer of deselecteer gemarkeerde reeksen
CALL:
dealwithdiablok(obj,event,hlist,optie)
INPUT:
obj: <handle> van de 'calling' uicontrol
event: leeg, standaard argument van een callback
hlist: <handle> van listbox
optie: <string> uit te voeren actie
- savedia, opslaan geselecteerd reeks(en)
- O, zet status geselecteerd reeks(en) op Ongecontroleerd
- G, zet status geselecteerd reeks(en) op Goedgekeurd
- D, zet status geselecteerd reeks(en) op Definitief
- delete, wis geselecteerde reeks(en)
- hoofd, wijs geselecteerde reeks aan als hoofdsensor
- hoofdspecial, wijs geselecteerde reeks aan als
hoofdsensor bij andere locatie
- neven, schrap de geselecteerde reeksen als hoofdsensor
- TE3_2_HTE3, converteer TE3/2 naar HTE3 als mogelijk
- HTE3_2_TE3, converteer HTE3/2 naar TE3 als mogelijk
- exportascii, exporteer geselecteerde reeksen naar kaal
ascii bestand
OUTPUT:
geen directe uitvoer
METHODE:
Wordt aangeroepen uit contextmenu van lijst of button
Conventie 1: de lijst "listobj" heeft als userdata de reeks ID's
Conventie 2: de gemarkeerde items in lijst "listobj" dienen gewijzigd
te worden
ApplicationRoot\wavixIV\DATABEHEER
15-Oct-2008 12:38:08
9593 bytes
defaultconfig - selecteer reeksen als hoofdsensoren
CALL:
[db,reportstr] = defaultconfig(db,C,mode,dia,WavixLoc)
INPUT:
db: <struct> de centrale database
C: <struct> met constantes
actie: <string> uit te voeren actie:
- remove: verwijder de locaties uit array dia uit
locatietabel
- add: voeg de locaties uit array dia toe aan de
locatietabel
- addascii: voeg de locaties uit array dia toe aan de
locatietabel voor zover deze locaties
voorkomen in een stuurfile
dia: <structarray> met dia's
WavixLoc: <structarray> (optioneel) die correspondeert met dia. Bevat
het veld wavixloc: wavix locatie waarvoor de reeks geldt
OUTPUT:
db: <struct> de centrale database met aangepaste velden:
- db.loc.ID: identifier
- db.loc.ID: sLoccod: naam van deze locatie
- db.loc.(H1_3/Hm0/TE3/TH1_3/Th0/Tm02): ID van reeks
die als hoofdsensor fungeert
reportstr: <string> weg te schrijven logboek aantekeningen
ApplicationRoot\wavixIV\DATABEHEER
29-Jul-2008 21:57:18
25078 bytes
do_import_conversie_network - importeer netwerken voor het bijschatten
van hoofdsensoren m.b.v. de nevensensoren
naar het werkgebied
CALL:
u = do_import_conversie_network(C,fname,NetworkArray,u)
INPUT:
C: <struct> met de wavix constantes
fname: <string> met de naam van het te importeren bestand
NetworkArray: <array of struct> van netwerken zie
emptystruct('netwerk')
u: <struct> de centrale database
OUTPUT:
u: <struct> de centrale database met de lijst met conversie
netwerken aangepast,
N.B. deze netwerken zijn in tegenstelling tot de
netwerken die geimporteerd zijn met do_import_network
niet te zijn in de netwerkenlijst in het netwerkbeheer
scherm
ApplicationRoot\wavixIV\DATABEHEER
01-Oct-2007 10:12:02
2364 bytes
do_import_dia - voer de import actie voor een dia uit
CALL:
db = do_import_dia(C,fname,blok,db)
INPUT:
C: <struct> met wavix constantes
fname: <string> met de bestandsnaam van de te importeren dia
blok: <struct array> (optioneel) met dia blokken met velden:
W3H
MUX
TYP
RKS
TPS
WRD <=== Volgens DONAR datastructuur
db: <struct> de centrale database
h_wait: <jacontrol object> (optioneel) type jprogressbar voor weergave
voortgang, default wordt er een multiwaitbar
aangemaakt
do_interp: <boolean> (optioneel) true -> interpoleer blok naar tijdsas
en complementeer wind en waterhoogte
OUTPUT:
db: <struct> de bijgewerkte centrale database met de nieuwe reeks(en)
See also: databeheer, load_wavixascii
ApplicationRoot\wavixIV\DATABEHEER
15-Oct-2008 11:18:52
11037 bytes
exportascii - GUI for exporting series in bare ascii format
CALL:
exportascii(C,db,indx)
INPUT:
C: <struct> with constants
db: <struct> the central database
indx: <vector> indices of the series to export
OUTPUT:
no direct output, the data are written to an ascii file
ApplicationRoot\wavixIV\DATABEHEER
13-Feb-2009 13:53:04
17738 bytes
extend_time - breid tijdsinterval van de reeksen in de database uit
CALL:
db = extend_time(db,indx,M)
INPUT:
db: <struct> de centrale database
indx: <vector> met indices van reeksen
(correspondeert met kolom index in M)
M : <struct> de database in matrixvorm (zie db2mat)
OUTPUT:
db: <struct> de centrale database met de volgende bijgewerkte velden:
- db.dia.W1
- db.dia.stdW
- db.dia.V
- db.dia.stdV
- db.dia.status
- RKS
- TPS
See also: set_hiaat, limit_time
ApplicationRoot\wavixIV\DATABEHEER
24-Sep-2007 20:26:18
1552 bytes
limit_time - perk tijdinterval van reeksen in database in
CALL:
[db, reportstr] = limit_time(db, C, indx, taxis)
INPUT:
db: <struct> de centrale database
C: <struct> met wavixconstanten
indx: <vector> met indices van reeksen
taxis: <array of datenum> van geselecteerde tijdsas
OUTPUT:
db: <struct> de centrale database met de volgende bijgewerkte velden:
- db.dia.W1
- db.dia.stdW
- db.dia.V
- db.dia.stdV
- db.dia.status
- RKS
- TPS
reportstr: <string> commentaar voor het logboek
See also: set_hiaat, extend_time
ApplicationRoot\wavixIV\DATABEHEER
24-Sep-2007 20:27:38
2599 bytes
listRKS - vul een struct array van RKS structures op basis
van een WAVIX dia array
CALL:
RKSs = listRKS(dia,indices)
INPUT:
dia: <struct array> met dia's (zie emptystruct('dia'))
indices: <vector> (optioneel) te gebruiken indices (default: alle)
OUTPUT:
RKSs: <struct array> van het RKS gedeelte van een dia
ZIE OOK:
listW3H
ApplicationRoot\wavixIV\DATABEHEER
23-Dec-2004 08:57:06
722 bytes
select_interval - gui voor het selecteren van een tijdsinterval voor het
uitbreiden of inperken van het tijdsinterval van het
werkgebied
CALL:
[begintijd,eindtijd,uitbreiden] = select_interval(begintijd,eindtijd)
INPUT:
begintijd: <datenum> originele begintijd, wordt getoond in gui
eindtijd: <datenum> originele eindtijd, wordt getoond in gui
OUTPUT:
begintijd: <datenum> nieuwe begintijd
eindtijd: <datenum> nieuwe eindtijd
uitbreiden: <int> uitkomst van checkbox uitbreiden dias tot
compleet interval, mogelijke waarden:
- 0, inperken studieperiode
- 1, uitbreiden studieperiode
ApplicationRoot\wavixIV\DATABEHEER
15-Oct-2008 12:39:56
6575 bytes
set_hiaat - markeer geselecteerde punten als hiaat
CALL:
db = set_hiaat(optie,db,indx,f_hiaat,msg)
INPUT:
optie: <string> bij te werken veld:
W: waarde veld
V: voorspelling (nog niet in gebruik)
db: <struct> de centrale database
indx: <vector> van indices in WGB dia lijst
f_hiaat: <vector> indices van de te markeren hiaten
msg: <string> message voor undo lijstweergave
OUTPUT:
db: <undoredo object> met de centrale database met de hiaten
ApplicationRoot\wavixIV\DATABEHEER
18-Oct-2007 18:44:22
1354 bytes
updatetoestand - voer automatische acties uit als een toestandsovergang
plaatsvindt
CALL:
[db,msg] = updatetoestand(db,knop_main,knop_sub)
INPUT:
db: <struct> de centrale database
knop_main: <string> de toestand waartoe de geselecteerde button
behoort
knop_sub: <string> de subtoestand waartoe de geselecteerde button
behoort
OUTPUT:
db: <struct> de updated centrale database
msg: <string> eventuele foutmelding
ApplicationRoot\wavixIV\DATABEHEER
15-Oct-2008 12:46:32
13260 bytes
Estimate - schat de reeksen van de hoofdsensoren
CALL:
Estimate(obj, event, mode, opt)
INPUT:
obj: <handle> van de 'calling' uicontrol, (wordt niet gebruikt)
event: leeg, standaard argument van een callback (wordt niet gebruikt)
mode: <string> met de uit te voeren actie:
- 'Init', maak schattingen voor alle hiaatwaarden m.b.v.
neurale netwerken
- 'Neven', schat de reeksen van de hoofdsensoren bij
m.b.v de conversie netwerken
- 'All', schat alle reeksen van de hoofdsensoren bij
m.b.v. neurale netwerken
- 'Conhop', Optimaliseer wederzijdse hiaten
- 'selectie', schat enkele reeksen van de hoofdsensoren
bij m.b.v. neurale netwerken
opt: <struct> meegegeven opties vanuit start_conhop
OUTPUT:
geen directe uitvoer, de schattingen worden opgeslagen in de centrale
database, en zichtbaar gemaakt in de grafieken in het Wavix hoofdscherm
See also: start_conhop
ApplicationRoot\wavixIV\HOOFDSCHERM
10-Mar-2009 19:25:34
3852 bytes
GetColSpecsDefinition - bouw sleutels voor het selecteren van de kolommen
CALL:
[ColSpecHoofd,ColSpecNeven,groot2klein,Parameters] = GetColSpecsDefinition
INPUT:
geen invoer
OUTPUT:
ColSpecHoofd: <array of struct> waarvan de lengte overeenkomt
met het aantal WAVIX kolommen -1. Dit array bevat de
sleutels en conversie instructies voor de WAVIX tabel
met HOOFDsensoren.
De structures van dit array hebben de volgende velden:
sLoccod: <cell array> met primaire, secundaire, etc sleutel voor
locatie voor andere kolommen dan die voor windgegevens is
momenteel alleen de primaire sleutel bepaald. Er zijn
maximaal 6 sleutels. De sleutel 'NB' wordt in dia2wavix
als speciale waarde behandeld.
sParcod: <string> met de sleutel voor parameter type.
sVatcod: <cell array> met primaire, secundaire, etc sleutel voor
detectortype voor andere kolommen dan die voor
windgegevens is momenteel alleen de primaire sleutel
bepaald. Er zijn maximaal 6 sleutels. De sleutel 'NB'
wordt in dia2wavix als speciale waarde behandeld.
De lengte van dit cell array moet overeenkomen met het
aantal sleutels voor locaties.
verplicht: code voor het type waarschuwing bij een niet
gevonden blok
factor: Ophoogfactor voor Dia2Wavix (bijvoorbeeld voor het
geval dat DONAR een andere eenheid gebruikt dan
WAVIX)
verschil: Vaste ophoging
bewerking: Veld dat een bepaalde bewerking karakteriseert.
Momenteel zijn de volgende bewerkingen ondersteund:
1 ===> Conversie van TE3 naar TE10 (4*sqrt)
2 ===> Bijgissen van hiaten door middel van lineaire
interpolatie
ColSpecNeven: <array of struct> waarvan de lengte overeenkomt met het
aantal WAVIX kolommen -1. Dit array bevat de sleutels en
conversie instructies voor de WAVIX tabel met
NEVENsensoren.
groot2klein: Een lijst corresponderend met de lijst van
parametersoorten, met voor elke parameter de locatie van
de WAVIX kolom voor deze parameter in de repeterende
blokken van de WAVIX tabel
Parameters: <struct> met dias met voor elke parameter een blok dat de
Metagegevens bevat. De blokken staan in de structure in
dezelfde volgorde als dat ze voorkomen in de WAVIX tabel.
ApplicationRoot\wavixIV\HOOFDSCHERM
27-Dec-2004 11:45:52
17315 bytes
do_apply - voer bewerkingen uit op de geselecteerde periodes
CALL:
function do_apply(obj,event,mode)
INPUT:
obj: <handle> van de 'calling' uicontrol
event: leeg, standaard argument van een callback
mode: <string> bewerking
- ok: keur de huidige waarde voor de
geselecteerde periode goed
- estimate: pas geschatte waarde voor geselecteerde
periodes toe
- hiaat: zet de geselecteerde periode op hiaat
OUTPUT:
geen directe uitvoer, de centrale database is aangepast
ApplicationRoot\wavixIV\HOOFDSCHERM
02-Nov-2007 16:40:24
8581 bytes
emptyu - initialiseer undoredo object voor wavix
CALL:
[db,filename] = emptyu(C,filename,signature)
INPUT:
C: <struct> met constantes voor de stormnet applicatie
filename: <string> met de naam van het te openen bestand
signature: <double> (optioneel) met signaturen van het undoredo object
OUTPUT:
db: <undoredo object> met een 'leeg' werkgebied.
filename: <string> met de bestandsnaam van het werkgebied
See also: emptystruct, emptyud
ApplicationRoot\wavixIV\HOOFDSCHERM
16-Oct-2008 16:19:04
5833 bytes
emptyud - maak een lege userdata structure aan
CALL:
ud = emptyud(stamp)
INPUT:
stamp: <datenum> initiele tijd stempel voor het veld timeofchange
TIP: laat dit veld overeen komen met het veld 'timeofcommit' in
een overkoepelende structure
OUTPUT:
ud: <struct> een 'lege' userdata structure.
APPROACH:
Deze functie komt in de plaats van een klassieke declaratie en maakt het
mogelijk om:
1. structures the rangschikken in een array
2. te testen op bepaalde veldwaardes, zonder een dergelijke test vooraf
te laden gaan door een test met 'isfield'
See also: emptyu, emptystruct
ApplicationRoot\wavixIV\HOOFDSCHERM
05-Dec-2006 01:21:24
1129 bytes
getwgbname - retourneer naam van werkgebied
CALL:
[filename,stamnaam] = getwgbname
INPUT:
geen invoer
OUTPUT:
filename: <string> volledige filenaam met pad
bv: 'C:\d\modelit\wavixIV\Untitled.wv4'
stamnaam: <string> filenaam zonder pad en extensie
bv: 'Untitled'
See also: setwgbname
ApplicationRoot\wavixIV\HOOFDSCHERM
19-Oct-2006 20:16:22
533 bytes
ApplicationRoot\wavixIV\HOOFDSCHERM
28-Nov-2008 11:17:09
1626 bytes
linestyle - properties for lines within wavix
CALL:
lstyle = linestyle_wavix
INPUT:
no input
OUTPUT:
lstyle: <struct> with lstyle.<linetype>: the properties of the line
EXAMPLE:
h = line(lstyle.hiaat); %initialise line
ApplicationRoot\wavixIV\HOOFDSCHERM
20-Mar-2007 13:35:20
5254 bytes
load_data - callback van menu 'laad werkgebied', user interface voor het
laden van een eerder bewaard werkgebied
CALL:
dummy = load_data(obj,event,fname)
INPUT:
obj: <handle> van de 'calling' uicontrol
event: leeg, standaard argument van een callback
fname: <string> naam van te laden file
OUTPUT:
geen directe uitvoer, userdata van de centrale database wordt aangepast
METHODE:
- Kijk of oude data bewaard moeten blijven (wav_check_exit)
- Haal de naam van de invoerfile op (getfile)
- Schakel interactie uit (mbd_suspend)
- Laad data
- Schakel interactie in (mbd_restore)
- Verwijder introtext
- Schakel menu's in die van data afhangen (activatemenus)
- Schakel de save menus uit (heractiveer ze bij de volgende aanroep van
wav_check_exit)
- Pas de naam van het window aan
- Update scherm (update)
ApplicationRoot\wavixIV\HOOFDSCHERM
15-Oct-2008 12:52:28
2223 bytes
load_wavixascii - callback van menu 'importeren', User interface
voor het laden van wavix2000 ascii bestanden
CALL:
dummy = load_wavixascii(obj,event,Sensortype,fname)
INPUT:
obj: <handle> van de 'calling' uicontrol
event: leeg, standaard argument van een callback
Sensortype: <string> met mogelijke waarden
'hoofd'
'neven'
fname: <string> naam van het te laden bestand
OUTPUT:
geen
De property 'userdata' wordt aangepast:
METHODE:
- Kijk of oude data bewaard moeten blijven (morf_check_exit)
- Haal de naam van de invoerfile op (getfile)
- Schakel interactie uit (mbd_suspend)
- Laad data
- Schakel interactie in (mbd_restore)
- Verwijder introtext (digivalwinresize)
- Schakel menu's in die van data afhangen (activatemenus)
- Schakel de save menus uit (de volgende aanroep van morf_check_exit
activeert ze weer
- Pas de naam van het window aan
- Update scherm (update)
ApplicationRoot\wavixIV\HOOFDSCHERM
15-Aug-2008 13:15:42
15401 bytes
save_data - callback van menu 'bewaar werkgebied'
User interface voor het bewaren van een werkgebied
CALL:
save_data(obj,event,fname)
INPUT:
obj: <handle> van de 'calling' uicontrol
event: leeg, standaard argument van een callback
fname: <string> (optioneel) naam van het te bewaren bestand
ongedefinieerd (nargin=2) ==> vraag gebruiker om filenaam
string ==> gebruik deze naam
empty string '' ==> gebruik werkgebied naam (tenzij
nog niet gekozen)
OUTPUT:
saved: <int> met mogelijke waarden:
- 1 als daadwerkelijk gesaved
- 0 als cancel ingedrukt
METHODE:
- Haal constantes en userdata op
- Bepaal de filenaam van de te bewaren data
- Schakel GUI tijdelijk uit (mbd_suspend)
- Bewaar data
- Activeer GUI (mbd_restore)
- Deactiveer save buttons
Deze worden bij de eerste wijziging weer door check_exit geactiveerd
ApplicationRoot\wavixIV\HOOFDSCHERM
15-Oct-2008 12:51:32
2937 bytes
selectinterval - selecteer meerdere periodes
CALL:
selectinterval(obj,event,mode,L,x1,x2,y1,y2)
INPUT:
obj: <handle> van de 'calling' uicontrol
event: leeg, standaard argument van een callback
mode: <string> meegegeven vlag bij aanroep
leftclick: met linkermuis in grafiek geklikt
range: met de linkermuis is op de rubberband geklikt
next: een actie is uitgevoerd op het huidige geselecteerde
tijdstip en er moet verschoven worden naar de
volgende periode
L: <int> locatieindex (locatienaam == db.loc(L).sLoccod)
x1: <datenum> begintijd selectie periode (optioneel)
x2: <datenum> eindtijd selectie periode (optioneel)
y1: ongebruikt
y2: ongebruikt
OUTPUT:
1 of meer periodes zijn geselecteerd
METHODE:
Het gedrag van de functie hangt af van de manier van aanroepen.
Als met 1 argument aangeroepen
Aangeroepen als callback van lijst met alfanumerieke data.
x1 bevat een lijst met periode indices.
Teken de bijbehorende cirkels.
Als met 2 argumenten aangeroepen (via zoomtool):
x1 = begin selectie periode
x2 = eind selectie periode
Bepaal alle tussenliggende, te selecteren periodes.
Gebruik de functie FIND_SELECTABLE om na te gaan welke periodes
in aanmerking komen.
Teken cirkels.
Selecteer Rijen uit lijst met alfanumerieke data.
ApplicationRoot\wavixIV\HOOFDSCHERM
05-Dec-2006 02:58:04
6311 bytes
set_meetbereik - gui voor het aanpassen van het meetbereik van de reeksen
in Wavix
CALL:
set_meetbereik(obj, event)
INPUT:
obj: <handle> van de aanroepende uicontrol
event: <leeg> standaard matlab callback argument
OUTPUT:
geen uitvoer, de settings van het hoofdscherm zijn aangepast, in het
veld DefineRange zijn de nieuwe bereiken van de meetreeksen opgeslagen
ApplicationRoot\wavixIV\HOOFDSCHERM
13-Feb-2009 13:53:16
8610 bytes
set_werkgebied - gui voor het aanpassen van de tijdstap in Wavix
CALL:
set_werkgebied(obj, event)
INPUT:
obj: <handle> van de aanroepende uicontrol
event: <leeg> standaard matlab callback argument
OUTPUT:
geen uitvoer, een bestand met de naam wavix.opt wordt weggeschreven,
daarin staat in de variabele 'tijdstap' de tijdstap (10 of 60 minuten)
aangegeven
ApplicationRoot\wavixIV\HOOFDSCHERM
13-Feb-2009 13:53:24
6712 bytes
setwgbname - wijzig naam van werkgebied
CALL:
setwgbname(filename,extra)
INPUT:
filename: <string> te gebruiken in titel van hoofdscherm en om op
te slaan
extra: <string> extensie voor de bestandsnaam
OUTPUT:
geen uitvoer
See also: getwgbname
ApplicationRoot\wavixIV\HOOFDSCHERM
19-Oct-2006 20:16:18
509 bytes
statreport - genereer een rapport
met statistieken per reeks geaggregeerd op globaal, locatie,
parameter en reeksniveau
CALL:
stats = statreport(obj, event)
INPUT:
obj: <handle> van de 'calling' uicontrol
event: leeg, standaard argument van een callback
OUTPUT:
stats: <matrix> met de gegenereerde statistieken
APPROACH:
Een rapport wordt gegenereerd op de console en
genoteerd in het logboek
See also:
ApplicationRoot\wavixIV\HOOFDSCHERM
28-Oct-2007 16:59:30
8311 bytes
undotoolbar - creeer standaard buttons voor toolbars in Wavix applicatie
CALL:
undotoolbar(C,present)
INPUT:
C: <struct> de wavix constantes
present: <vector> van lengte 8 met vlaggen voor het wel/niet opnemen van
de volgende buttons:
present(1): naar hoofdscherm
present(2): naar databeheer
present(3): naar netwerkbeheer
present(4): naar regressiebeheer
present(5): presenteren logboek
present(6): presenteren statistieken
present(7): help
present(8): reset redo/undo history
OUTPUT:
geen directe output, buttons worden aangemaakt in de toolbars,
wordt gebruikt in o.a. hoofdscherm, regressiebeheer, databeheer en
netwerkbeheer.
ApplicationRoot\wavixIV\HOOFDSCHERM
30-Oct-2006 11:29:10
3555 bytes
wav_check_exit - check of alle data bewaard zijn
CALL:
status = wav_check_exit
INPUT
geen invoer
OUTPUT
status == 0 ==> er waren geen onbewaarde data
status == 1 ==> er waren onbewaarde data, deze zijn bewaard
status == 2 ==> er waren onbewaarde data, deze zijn niet bewaard
status == -1 ==> er waren onbewaarde data, de gebruiker heeft
CANCEL ingedrukt
METHODE:
Check de status van het menu "save data"
Deze functie wordt aangeroepen iedere keer nadat iets in
de dataset wordt gewijzigd.
ApplicationRoot\wavixIV\HOOFDSCHERM
19-Oct-2006 20:17:48
1060 bytes
wavixmain - hoofdprogramma van de wavixIV applicatie,
installeert het wavix scherm
CALL:
wavixmain
INPUT:
geen invoer
OUTPUT:
geen directe uitvoer, het wavix scherm wordt geopend
See also: wavix, wavixview
ApplicationRoot\wavixIV\HOOFDSCHERM
13-Feb-2009 13:53:34
55433 bytes
wavixview - view functie voor het wavix hoofdscherm
CALL:
wavixview(udnew,opt,upd)
INPUT:
udnew: <struct> de centrale database
opt: <struct> GUI settings voor wavix
upd: <struct> de te updaten scherm elementen
OUTPUT:
geen directe output, het wavix hoofdscherm is geupdate
ApplicationRoot\wavixIV\HOOFDSCHERM
15-Oct-2008 12:36:18
57068 bytes
ComposeNetworkList -
ApplicationRoot\wavixIV\HULPFUNCTIES
30-Sep-2007 22:31:36
1957 bytes
ComputeStd - bereken de meetfout van de reeks in de dia
CALL:
sigma = ComputeStd(db,dia)
INPUT:
db: <struct> de centrale database met relevante velden:
- db.dia
- db.loc
dia: <struct> met de dia waarvoor de meetfout moet worden bepaald
OUTPUT:
sigma: de meetfout in de reeks
sigma = NaN als niet alle data voor het berekenen van de
meetfout aanwezig zijn in de database
DOCUMENTATION: vuistregels nauwkeurigheid golfparameters, Bram Roskam, juni
2007, zie helpcenter
De nauwkeurigheid voor de windrichtingsklassen wordt gezet
op 30 als
- richting(in graden) gelijk is aan 0 (windstilte)
(richting 0 wordt weergegeven als 360 !)
- richting(in graden) gelijk is aan 990
(veranderlijke wind)
ApplicationRoot\wavixIV\HULPFUNCTIES
25-Jul-2007 14:13:10
7856 bytes
DisplayNet - display network characteristics
CALL:
string = DisplayNet(netwerk_lijst)
INPUT:
netwerk_lijst:
OUTPUT:
string:
See also: invoer2string, uitvoer2string, stormtijden2string
ApplicationRoot\wavixIV\HULPFUNCTIES
13-Oct-2006 21:58:22
1837 bytes
binstatus2donstat - transformeer Wavix binaire status naar Donar codering
CALL:
donstat = binstatus2donstat(status)
INPUT:
status: <vector of uint8> binnen wavix gebruikte geaggregeerde binaire
status met:
bit 1: Hiaat
bit 2: Controle
bit 3: Outlier
bit 4: Validatie status
bit 5: Herkomst
OUTPUT:
donstat: <vector of uint8> Donar status, mogelijke waarden
0 : gewone waarneming
25: geinterpoleerde waarde
99: hiaat
ZIE OOK:
donstat2binstatus
setbinstatus
getbinstatus
binstatus2type
ApplicationRoot\wavixIV\HULPFUNCTIES
23-Dec-2004 17:42:04
1075 bytes
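A minimal sketch of the mapping that binstatus2donstat describes, assuming that the gap bit (bit 1) takes precedence over the interpolated-origin bit (bit 5); this is an illustration, not the shipped code:
    donstat = zeros(size(status), 'uint8');   % 0  : ordinary observation
    donstat(bitget(status, 5) == 1) = 25;     % 25 : interpolated value (bit 5, herkomst)
    donstat(bitget(status, 1) == 1) = 99;     % 99 : gap (bit 1); assumed to win over interpolation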
binstatus2type - haal statusbits op voor alle statustypes uit de
geaggregeerde status
CALL:
[statustype,bvalide,bherkomst,aggregstatus] = binstatus2type(status)
INPUT:
status: <uint8> de geaggregeerde status, elk bit stelt een status voor
OUTPUT:
statustype: <uint8> bepaalt welk symbool geplot wordt, mogelijke waarden
- C.ALLES
- C.HIAAT
- C.OUTLIER
- C.ANDERS
bvalide: <uint8> mogelijke waarden:
- 1: valide
- 0: nog niet valide
bherkomst: <uint8> mogelijke waarden:
- 1: geinterpoleerd
- 0: niet geinterpoleerd
aggregstatus: <uint8> mogelijke waarden: (nog niet in gebruik)
- aggregstatus.numhiaat
- aggregstatus.numoutlier
- aggregstatus.numanders
- aggregstatus.numvalide
- aggregstatus.numtotal
METHODE:
deze procedure roept eerst "getbinstatus" aan om de uint8 statuscodes
te ontcijferen. Daarna wordt op basis van een aantal beslisregels een
statustype bepaald. Om te voorkomen dat een tweede aanroep van
getbinstatus noodzakelijk is worden ook enige andere attributen
geretourneerd.
ZIE OOK:
donstat2binstatus
binstatus2donstat
setbinstatus
getbinstatus
ApplicationRoot\wavixIV\HULPFUNCTIES
29-Oct-2007 05:27:26
5310 bytes
classify - deel de vector W in in klassen
CALL:
Classificatie = classify(W,Klassen)
INPUT:
W: <nx1 matrix> met de waarden die geclassificeerd moeten worden
Klassen: <rowvector> met de klassegrenzen voor W
OUTPUT:
Classificatie: <nx1 matrix> met de klassenummers
klassenummer = (nul of length(Klassen)) als een
element niet ingedeeld kan worden in een van de
opgegeven klassen
EXAMPLE: klassificeer de vector [1; 12; 9] m.b.v. de klassen [0 5 10 15]
stap 1: maak van [0 5 10 15] de matrix A := [0 5 10 15; 0 5 10 15; 0 5 10 15]
stap 2: maak van [1; 12; 9] de matrix B := [1 1 1 1; 12 12 12 12; 9 9 9 9]
stap 3: B > A --> C := [1 0 0 0; 1 1 1 0; 1 1 0 0]
stap 4: bereken C.*[1 2 3 4; 1 2 3 4; 1 2 3 4] --> D := [1 0 0 0; 1 2 3 0; 1 2 0 0]
stap 5: de klassen zijn nu max(D,[],2), oftewel [1; 3; 2]
ApplicationRoot\wavixIV\HULPFUNCTIES
22-Oct-2006 14:32:10
2385 bytes
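A minimal sketch of the classification idea worked out in the example above (illustrative, not the shipped implementation of classify):
    W       = [1; 12; 9];                        % values to classify
    Klassen = [0 5 10 15];                       % class boundaries
    C = bsxfun(@gt, W, Klassen);                 % stap 1-3: compare every value with every boundary
    D = bsxfun(@times, C, 1:numel(Klassen));     % stap 4: weight the hits with the class number
    Classificatie = max(D, [], 2);               % stap 5: -> [1; 3; 2]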
constantes_wavix - definieer constantes voor de WAVIX applicatie
CALL:
C = constantes_wavix(dummy_arg)
INPUT:
DummyArgument: het definieren van tenminste 1 invoer argument heeft tot
gevolg dat een hulpscherm wordt gestart voor het instellen
van de opties
OUTPUT:
C: <struct> met een groot aantal velden ieder veld bevat een constante
die in de applicatie gebruikt wordt
APPROACH:
Door gebruik te maken van constantes wordt vermeden dat door kleine
spelfouten fouten in de applicatie sluipen die niet gedetecteerd worden
met een foutmelding.
Bovendien kunnen opties op deze wijze centraal gewijzigd worden
ApplicationRoot\wavixIV\HULPFUNCTIES
25-Jun-2008 11:01:34
12612 bytes
db2mat - zet de centrale database om in matrices
CALL:
Mat = db2mat(db,IDs,starttime,endtime)
INPUT:
db <struct> centrale wavix database
IDs: <vector> IDs van op te halen reeksen
starttime <datenum> (optioneel) het tijdstip van het begin van het
tijdsinterval waarvoor de data geselecteerd moet worden
endtime <datenum> (optioneel) het tijdstip van het eind van het
tijdsinterval waarvoor de data geselecteerd moet worden
N.B. als starttime en endtime niet gespecificeerd zijn dan
worden voor deze tijden de vroegste en laatste tijd van
alle dias gebruikt
OUTPUT:
Mat <struct> de database omgezet in een structure met de velden
- DiaIndx reeks index ivm terugschrijven data
- Wkey <struct> reekssleutel voor zoeken in NN definitie
+---- sLoccod
+---- sParcod
+---- sVatcod
- tijdsas gemeenschappelijke tijdsas van de vroegste
tot laatste waarneming van alle dias
- diatijd <matrix> dimensies: 2 bij aantal dias met in:
rij1 de startindex voor de tijdsas van de dia
rij2 de eindindex voor de tijdsas van de dia
- W, stdW, V, stdV, status
  in elke kolom van deze velden staan de waarden van
  de dia op de correcte plek t.o.v. de tijdsas
ApplicationRoot\wavixIV\HULPFUNCTIES
28-Nov-2006 17:41:14
10764 bytes
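A hypothetical usage sketch of db2mat; db and IDs are assumed to be present in the workspace, and only the documented fields Mat.tijdsas and Mat.W are used:
    Mat = db2mat(db, IDs);               % all selected dias on one common time axis
    plot(Mat.tijdsas, Mat.W(:, 1), '.-') % values of the first selected series
    datetick('x')                        % tijdsas contains datenums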
dbtools - hulpfuncties voor het ophalen van gegevens uit de centrale database
CALL:
[result,taxis,stdW] = dbtools(db,operation,varargin)
INPUT:
operation: <string> mogelijke waarden:
'ID2Indx'
haal de index van een dia op
result = dbtools(db,'ID2Indx',ID)
'Indx2W'
haal waarde en status op uit reeks met opgegeven
index en tijdas
[result,status] = dbtools(db,'Indx2W',indx,taxisRetrieve)
[result,status] = dbtools(db,'Indx2W',indx)
'ID2W'
Haal waarde en status op uit reeks met opgegeven
ID en tijdas
[result,status] = dbtools(db,'ID2W',ID,taxisRetrieve)
[result,status] = dbtools(db,'ID2W',ID)
'Dia2W'
Haal waarde en status op uit Dia structure en tijdas
[result,status] = dbtools(dia,'Dia2W',taxisRetrieve)
[result,status] = dbtools(dia,'Dia2W')
'getdia'
haal dia op met bekende ID
[result,status] = dbtools(db,'getdia',ID)
'get'
get the values of the dia specified by ID
[result,status] = dbtools(db,'ID2W',ID,taxisRetrieve)
[result,status] = dbtools(db,'ID2W',ID)
'getnetwerknamen'
haal de namen van de netwerken in het werkgebied op
result = dbtools(db,'getnetwerknamen')
'getconvnetwerknamen'
haal de namen van de conversie netwerken in het
werkgebied op
result = dbtools(db,'getconvnetwerknamen')
'getlocnamen'
haal de namen van de aanwezige locaties op
result = dbtools(db,'getlocnamen')
'getvarnamen'
haal de namen van de aanwezige variabelen op een
locatie op
result = dbtools(db,'getvarnamen',locatie)
'getveldappnamen'
haal de namen van de aanwezige veldapparaten op een
locatie voor een variabele op
result = dbtools(db,'getveldappnamen',locatie,variabele)
ApplicationRoot\wavixIV\HULPFUNCTIES
22-Feb-2007 21:46:16
14816 bytes
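A hypothetical usage sketch with some of the documented dbtools operations (db and ID are assumed to be present in the workspace):
    indx        = dbtools(db, 'ID2Indx', ID);   % index of the dia with this ID
    [W, status] = dbtools(db, 'ID2W', ID);      % values and status of that series
    locnamen    = dbtools(db, 'getlocnamen');   % names of the available locations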
donstat2binstatus - transformeer Donar codering naar Wavix binaire status
CALL:
status = donstat2binstatus(donstat)
INPUT:
donstat: <vector of uint8> Donar status, mogelijke waarden
0 : gewone waarneming
25: geinterpoleerde waarde
99: hiaat
OUTPUT:
status: <vector of uint8> binnen wavix gebruikte geaggregeerde binaire
status met:
bit 1: Hiaat
bit 2: Controle
bit 3: Outlier
bit 4: Validatie status
bit 5: Herkomst
ZIE OOK:
binstatus2donstat
setbinstatus
getbinstatus
binstatus2type
ApplicationRoot\wavixIV\HULPFUNCTIES
23-Dec-2004 18:21:10
1569 bytes
emptystruct - maak structures aan die al het goede formaat hebben
INPUT:
type: <string>
OUTPUT:
'convnetwerk' -> S.naam = ''
S.status = 0 %0 niet getrained, 1 getrained, 2 getrained zonder uitvoer informatie
S.netwerk = emptystruct('objnetwork')
S.data = emptystruct('data')
S.output = []
S.target = []
S.preprocess = emptystruct('preprocess')
S.ensemble = emptystruct('ensemble')
S.output = emptystruct('output')
S.Delta = []
'netwerk' -> S.naam = ''
S.status = 0 %0 niet getrained, 1 getrained, 2 getrained zonder uitvoer informatie
S.netwerk = emptystruct('objnetwork')
S.data = emptystruct('data')
S.output = []
S.target = []
S.preprocess = emptystruct('preprocess')
S.ensemble = emptystruct('ensemble')
S.output = emptystruct('output')
S.Delta = 0
'ensemble' -> S.herhalingen = []
S.trainingset = []
S.validatieset = []
S.testset = []
S.member = emptystruct('member')
'member' -> S.IW = []
S.LW = []
S.b = []
S.output = []
S.testindex = []
'preprocess'-> S.meanp = []
S.stdp = []
S.meant = []
S.stdt = []
S.transmat = []
S.pca = []
'tmpnetwork'-> S.naam = []
S.invoer = ''
S.uitvoer = ''
S.neuronen = []
S.transferfunctie = ''
S.trainfunctie = ''
S.doelfunctie = ''
S.herhalingen = []
S.pca = []
S.trainingset = []
S.validatieset = []
S.testset = []
'parameters' -> S.epochs = 100
S.goal = 0
S.lr = 0.0100
S.lr_dec = 0.7000
S.lr_inc = 1.0500
S.max_fail = 5
S.mem_reduc = 1
S.min_grad = 1.0000e-06
S.mu = 0.0010
S.mu_dec = 0.1000
S.mu_inc = 10
S.mu_max = 1.0000e+010
S.max_perf_inc = 1.0400
S.mc = 0.9000
S.deltamax = 50
S.delta_inc = 1.2000
S.delta_dec = 0.5000
S.delta0 = 0.0700
S.sigma = 5.0000e-005
S.lambda = 5.0000e-007
S.searchFcn = 'srchbac'
S.scale_tol = 20
S.alpha = 0.0010
S.beta = 0.1000
S.delta = 0.0100
S.gama = 0.1000
S.low_lim = 0.1000
S.up_lim = 0.5000
S.maxstep = 100
S.minstep = 1.0000e-006
S.bmax = 26
S.show = 25
S.time = Inf
'dia'-> S.ID = 1
S.blok = []
S.stdW = []
S.V = []
S.stdV = []
S.status = [] %status toegevoegd 21 aug 2004
'vhg'-> S.richting = []
S.snelheid = []
S.locs = []
S.factor = []
S.sigma = []
'model'-> S.stuurfile = ''
S.netwerkfile = ''
S.vhgfile = ''
'objnetwork' -> S.neuronen = []
S.transferfunctie = ''
S.trainfunctie = ''
S.doelfunctie = ''
S.parameters = emptystruct('parameters')
'toestand' -> S.main = ''
S.sub = ''
'proxy' -> S.poort = '80'
S.adres = 'proxy.minvenw.nl'
'matroosprefs' -> S.ftpsite = 'http://matroos2/matroos/timeseries/php/image_series_test.php/'
S.interval = 240
S.directory = ''
ApplicationRoot\wavixIV\HULPFUNCTIES
12-Sep-2007 14:47:52
10686 bytes
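A hypothetical usage sketch of emptystruct: preallocating a struct array of 'dia' templates so every element carries the documented fields:
    dia        = emptystruct('dia');   % one template with fields ID, blok, stdW, V, stdV, status
    dias(1:10) = dia;                  % struct array of 10 identically shaped dias
    dias(3).ID = 3;                    % fill in the fields per series afterwards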
eval_bereik - Markeer punten die buiten het bereik vallen
CALL:
db = eval_bereik(db, guiopt, C)
INPUT:
db: <undoredo object> de centrale database
guiopt: <undoredo object> met de opties van het Wavix hoofdscherm
C: <struct> met wavix constantes
OUTPUT:
db: <struct> de centrale database met punten die buiten bereik
vallen gemarkeerd als outliers
ApplicationRoot\wavixIV\HULPFUNCTIES
15-Oct-2008 12:53:20
1448 bytes
eval_outliers - bepaal outliers
CALL:
db = eval_outliers(db,guiopt,C,label)
INPUT:
db: <struct> de centrale database
guiopt: <struct> met de opties van het Wavix hoofdscherm
C: <struct> met wavix constantes
label: <string> commentaar voor het logboek
OUTPUT:
db: <struct> de centrale database met outliers
ApplicationRoot\wavixIV\HULPFUNCTIES
15-Oct-2008 12:54:10
3856 bytes
fieldnameprint - verwijder symbolen uit string
CALL:
str = fieldnameprint(str)
INPUT:
str: <string> de string waaruit niet-toegestane symbolen verwijderd
moeten worden
OUTPUT:
str: <string> de string met daarin ' ','(',')','[',']' verwijderd
en '/' en '\' vervangen door '_'
ZIE OOK:
databeheer
ApplicationRoot\wavixIV\HULPFUNCTIES
23-Dec-2004 10:34:48
649 bytes
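A minimal sketch of the clean-up that fieldnameprint describes (the example string is hypothetical; this is an illustration, not the shipped code):
    str = 'Golf hoogte (Hm0) [cm]/10min';
    str = regexprep(str, '[ ()\[\]]', '');    % remove ' ', '(', ')', '[', ']'
    str = regexprep(str, '[/\\]', '_');       % replace '/' and '\' by '_'
    % -> 'GolfhoogteHm0cm_10min'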
get_C - haal de structuur met de wavix constantes op
CALL:
C = get_C
INPUT:
geen invoer
OUTPUT:
C: <struct> met wavix constantes
See also: undoredo/store get_db
ApplicationRoot\wavixIV\HULPFUNCTIES
13-Oct-2006 15:18:26
303 bytes
get_db - haal de databasestructure op uit de userdata van het hoofdscherm
CALL:
[db,C] = get_db
INPUT:
geen invoer
OUTPUT:
db: <struct> de centrale database
C: <struct> met constantes
See also: undoredo/store, get_C
ApplicationRoot\wavixIV\HULPFUNCTIES
13-Oct-2006 15:29:56
450 bytes
get_opt_databeheer - haal de settings van Wavix databeheer op
(deze moet wel opgestart zijn)
ApplicationRoot\wavixIV\HULPFUNCTIES
13-Oct-2006 13:29:24
204 bytes
get_opt_main - haal de options structure van het Wavix hoofdscherm op
CALL:
[guiopt, C] = get_opt_main
INPUT:
geen invoer
OUTPUT:
guiopt: <struct> met de opties van het Wavix hoofdscherm
C: <struct> met de Wavix constantes
ApplicationRoot\wavixIV\HULPFUNCTIES
24-Apr-2007 22:24:48
418 bytes
get_opt_netwerkbeheer - haal de settings van Wavix netwerkbeheer op
(deze moet wel opgestart zijn)
ApplicationRoot\wavixIV\HULPFUNCTIES
13-Oct-2006 17:51:26
218 bytes
get_opt_regressiebeheer - haal de settings van Wavix regressiebeheer op
(deze moet wel opgestart zijn)
ApplicationRoot\wavixIV\HULPFUNCTIES
13-Oct-2006 19:31:12
227 bytes
getbinstatus - haal statusbits op voor alle statustypes uit de
geaggregeerde status
CALL:
[bhiaat,bcontrole,boutlier,bvalide,bherkomst] = getbinstatus(status)
INPUT:
status: <uint8> de geaggregeerde status, elk bit stelt een status voor
OUTPUT:
bhiaat: bit 1 van status
bcontrole: bit 2 van status
boutlier: bit 3 van status
bvalide: bit 4 van status
bherkomst: bit 5 van status
bdroogval: bit 6 van status
See also: bitget, donstat2binstatus, binstatus2donstat, setbinstatus,
binstatus2type
ApplicationRoot\wavixIV\HULPFUNCTIES
02-Nov-2007 16:41:34
2958 bytes
invoer2string - display an invoer-structure as a string
CALL:
string = invoer2string(invoer)
INPUT:
invoer: <struct> see emptystruct('TC')
OUTPUT:
string: <string>
See also: emptystruct, DisplayNet
ApplicationRoot\wavixIV\HULPFUNCTIES
13-Oct-2006 22:02:00
514 bytes
listW3H - vul een struct array van W3H structures op basis van een WAVIX
dia array
CALL:
W3Hs = listW3H(dia,indices)
INPUT:
dia: <struct array> met dia's (zie emptystruct('dia'))
indices: <vector> (optioneel) te gebruiken indices (default: alle)
OUTPUT:
W3Hs: <struct array> van het W3H gedeelte van een dia
ZIE OOK:
listRKS
ApplicationRoot\wavixIV\HULPFUNCTIES
23-Dec-2004 08:58:28
719 bytes
mattools - voer operaties uit op Mat,
Mat is verkregen uit de database door db2mat toe te passen
CALL:
Mat = mattools(Mat,operation,varargin)
INPUT:
Mat: <struct> de database in matrixvorm, verkregen met db2mat
operation: <string> de operatie die op Mat uitgevoerd moet worden:
- 'FillWwithV' vervang de hiaten door geschatte waarden
- 'DeleteDias' verwijder dias uit Mat
- 'KeepDias' behoud de opgegeven dias en gooi de rest
weg uit Mat
varargin: <vector> met indices van de te verwijderen of de te
behouden dias, is leeg voor de optie FillWwithV
OUTPUT:
Mat: <struct> de bijgewerkte database in matrixvorm, de
originele database is niet veranderd
See also: db2mat
ApplicationRoot\wavixIV\HULPFUNCTIES
03-Oct-2007 16:22:06
4314 bytes
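An illustrative sketch of the 'FillWwithV' idea in mattools, assuming that gaps in Mat.W are stored as NaN (an assumption, not taken from the shipped code):
    gap        = isnan(Mat.W);    % gaps in the measured values
    Mat.W(gap) = Mat.V(gap);      % replace them by the estimated values V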
parseNNInvoer - Utility om een aantal sleutels op te delen in 1 sleutel
per tshift zodat ze gebruikt kunnen worden voor de
neurale netwerken
CALL:
S = parseNNInvoer(invoer)
INPUT:
invoer: <struct array> met sleutels
velden: - sLoccod: <string>
- sParcod: <string>
- sVatcod: <string>
- tShift: <integer> lengte 1 of meer
OUTPUT:
S: <struct array> aantal inputs (1 per tijdstip) lang met sleutels
velden: - sLoccod: <string>
- sParcod: <string>
- sVatcod: <string>
- tShift: <integer> lengte 1
See also: sleutel2struct
ApplicationRoot\wavixIV\HULPFUNCTIES
14-Oct-2006 02:08:18
1461 bytes
reeksaanduiding - maak een header ID Locatie Parameter Sensor aan en
print voor elke dia deze gegevens
CALL:
[str,hdr] = reeksaanduiding(dia)
INPUT:
dia: <array of struct> met dia's
OUTPUT:
str: <string> de ID-Locatie-Parameter-Sensor combinaties voor elke dia
hdr: <string> de header 'ID Locatie Parameter Sensor'
ApplicationRoot\wavixIV\HULPFUNCTIES
23-Dec-2004 06:15:04
719 bytes
separatestr - deel de string op in delen die gescheiden worden door een
spatie ' '
CALL:
varargout = separatestr(string)
INPUT:
string: <string> de string die opgedeeld moet worden
OUTPUT:
varargout: <string> hierin komen de delen van de op te delen string
VOORBEELD:
[a,b,c] = separatestr('A B C') => a == 'A', b == 'B', c == 'C'
[a,b] = separatestr('A B C') => a == 'A', b == 'B C'
[a,b,c,d] = separatestr('A B C') => a == 'A', b == 'B', c == 'C', d == ''
ApplicationRoot\wavixIV\HULPFUNCTIES
23-Dec-2004 07:15:24
766 bytes
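A minimal sketch of the behaviour documented for separatestr: the last requested output receives the remainder and surplus outputs become ''. The function name is hypothetical; this is an illustration, not the shipped code:
    function varargout = separatestr_sketch(str)
    rest = str;
    for i = 1:max(nargout, 1)
        if i < nargout
            [varargout{i}, rest] = strtok(rest);   % peel off one space-separated token
        else
            varargout{i} = strtrim(rest);          % remainder, or '' when exhausted
        end
    end
    % [a,b]     = separatestr_sketch('A B C')  -> a = 'A', b = 'B C'
    % [a,b,c,d] = separatestr_sketch('A B C')  -> d = ''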
setbinstatus - aggregeer de statusbits tot 1 getal, door de afzonderlijke
bits van dat getal te zetten
CALL:
status = setbinstatus(bhiaat,bcontrole,boutlier,bvalide,bherkomst)
INPUT:
bhiaat: <(0 of 1)> bit 1 van status
bcontrole: <(0 of 1)> bit 2 van status
boutlier: <(0 of 1)> bit 3 van status
bvalide: <(0 of 1)> bit 4 van status
bherkomst: <(0 of 1)> bit 5 van status
bdroogval: <(0 of 1)> bit 6 van status
OUTPUT:
status: <uint8> geaggregeerde statusbits voor alle data
See also: donstat2binstatus, binstatus2donstat, getbinstatus,
binstatus2type
ApplicationRoot\wavixIV\HULPFUNCTIES
02-Nov-2007 16:42:10
1837 bytes
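A minimal sketch of the documented bit lay-out, using MATLAB's bitset/bitget (illustrative, not the shipped setbinstatus/getbinstatus code):
    status = uint8(0);
    status = bitset(status, 1, 1);    % bit 1: hiaat
    status = bitset(status, 4, 1);    % bit 4: valide
    bhiaat   = bitget(status, 1);     % -> 1
    boutlier = bitget(status, 3);     % -> 0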
uitvoer2string - display an uitvoer-structure as a string
CALL:
string = uitvoer2string(uitvoer)
INPUT:
uitvoer: <struct> see emptystruct('TC')
OUTPUT:
string: <string>
See also: emptystruct, DisplayNet
ApplicationRoot\wavixIV\HULPFUNCTIES
13-Oct-2006 22:01:18
462 bytes
view_help -
ApplicationRoot\wavixIV\HULPFUNCTIES
01-Aug-2008 17:46:06
1917 bytes
exportmon - exporteer het validatiemodel t.b.v. de wavix monitor
CALL:
exportmon(obj, event)
INPUT:
obj: <handle> van de aanroepende uicontrol
event: <leeg> standaard matlab callback argument
fname: <string> (optioneel) met de naam van het bestand waar het
validatiemodel heengeschreven moet worden, als niet
gespecificeerd dan verschijnt er een filebrowser.
OUTPUT:
geen directe uitvoer, het validatiemodel is gesaved in een door de
gebruiker gespecificeerd bestand
ApplicationRoot\wavixIV\MONITOR
22-Nov-2007 09:22:48
2508 bytes
get_opt_monitor - haal de settings van Wavix monitor op
(deze moet wel opgestart zijn)
CALL:
[opt, HWIN, C] = get_opt_monitor
INPUT:
geen invoer
OUTPUT:
opt: <undoredo object> met settings van de monitor
HWIN: <handle> van het monitor scherm
C: <struct> met de wavix constantes
ApplicationRoot\wavixIV\MONITOR
05-Aug-2007 14:10:40
514 bytes
get_opt_monitorgraph - haal de settings van de Wavix monitor grafieken op
(deze moet wel opgestart zijn)
CALL:
[opt, HWIN, C] = get_opt_monitorgraph
INPUT:
geen invoer
OUTPUT:
opt: <undoredo object> met settings van de monitor grafieken scherm
HWIN: <handle> van het monitor grafieken scherm
C: <struct> met de wavix constantes
ApplicationRoot\wavixIV\MONITOR
14-Oct-2007 19:06:34
566 bytes
monitorgraphview - view functie voor het monitorgraph scherm
CALL:
monitorgraphview(udnew, opt, upd, C, HWIN)
INPUT:
udnew: <struct> de centrale database
opt_monitorgraph: <struct> GUI settings voor de wavix monitor grafieken
opt_monitor: <struct> GUI settings voor de wavix monitor
upd: <struct> de te updaten scherm elementen
C: <struct> de wavix constantes
HWIN: <handle> van het monitorgraph scherm
OUTPUT:
geen directe output, het monitorgraph scherm is geupdate
See also: monitor, monitorgraph
ApplicationRoot\wavixIV\MONITOR
15-Oct-2008 12:33:34
11032 bytes
monitorview - view functie voor het monitor scherm
CALL:
monitorview(udnew, opt, upd, C, HWIN)
INPUT:
udnew: <struct> de centrale database
opt: <struct> GUI settings voor de wavix monitor
upd: <struct> de te updaten scherm elementen
C: <struct> de wavix constantes
HWIN: <handle> van het monitor scherm
OUTPUT:
geen directe output, het monitor scherm is geupdate
See also: monitor
ApplicationRoot\wavixIV\MONITOR
15-Oct-2008 12:50:18
11908 bytes
AnalyseNeuralNetwork - analyse tool voor een getrained neuraal netwerk
CALL:
AnalyseNeuralNetwork(NeuralNetwork)
INPUT:
NeuralNetwork: <struct> zie emptystruct('netwerk') met het netwerk dat
geanalyseerd moet worden
OUTPUT:
geen directe uitvoer, een scherm wordt geopend met analysetools
ApplicationRoot\wavixIV\NETWERKBEHEER
13-Feb-2009 13:54:38
15011 bytes
DefineNeuralNetwork - Definieer een nieuw feed-forward neuraal netwerk
CALL:
NeuralNetwork = DefineNeuralNetwork(NeuralNetwork)
INPUT:
NeuralNetwork: <struct> (optioneel) van het type 'netwerk',
zie emptystruct('netwerk')
NeuralNetwork kan leeg zijn of reeds aangemaakt
OUTPUT:
NeuralNetwork: <struct> van het type 'netwerk', zie
emptystruct('netwerk')
NeuralNetwork is leeg als operatie afgebroken is
ApplicationRoot\wavixIV\NETWERKBEHEER
22-Mar-2009 13:41:37
83660 bytes
ListAction - Handel commando's af die met sorttables te maken hebben
CALL:
ListAction(obj,event,hlist,mode)
INPUT:
obj: <handle> van aanroepende uicontrol
event: <leeg>
hlist: <jacontrol> van het type sorttable
mode: <string> bepaalt de uit te voeren actie, mogelijke waarden:
- exporteren
- toevoegen
- verwijderen
- wijzigen
- trainen
- analyseren
- saveasc
- savenet
- wistraining
- simuleren
OUTPUT:
geen directe uitvoer
ApplicationRoot\wavixIV\NETWERKBEHEER
15-Oct-2008 12:46:02
15538 bytes
ShowNeuralNetworkWeights - visualiseer de gewichten en bias van elke laag
en elk 'member' van een neuraal netwerk
CALL:
ShowNeuralNetworkWeights(NetworkStruct)
INPUT:
NetworkStruct: <struct> een neuraal netwerk zie emptystruct('netwerk')
OUTPUT:
geen directe uitvoer, de gewichten en bias van het netwerk worden
getoond in een hinton grafiek
ZIE OOK:
hinton
ApplicationRoot\wavixIV\NETWERKBEHEER
13-Feb-2009 13:55:06
9256 bytes
TrainNeuralNetwork - train een neuraal netwerk
CALL:
[NetworkStruct,comment] = TrainNeuralNetwork(W,stdW,Wkey,NetworkStruct)
INPUT:
W: <matrix> gemeten waarden, aantal periodes bij aantal
reeksen groot
stdW: <matrix> standaarddeviaties, aantal periodes bij aantal
reeksen groot
Wkey: <struct> met velden met bijbehorende (loc,var,veldapp)
combinatie per reeks
- sLoccod
- sParcod
- sVatcod
NetworkStruct: <struct> met een neuraal netwerk
(zie emptystruct('netwerk'))
OUTPUT:
NetworkStruct: <struct> met een neuraal netwerk, de members worden in
deze routine gevuld, d.w.z. de gewichten en bias worden
gevuld
comment: <string> commentaar voor het logboek, wordt gebruikt in
netwerkbeheer
ApplicationRoot\wavixIV\NETWERKBEHEER
02-Sep-2007 10:21:02
15135 bytes
accessnode - haal de subscript (zie subsasgn) op om de opgeven knoop uit
de boom van structure (zie gettree) te kunnen benaderen
CALL:
S = accessnode(node,structure,tree,labels)
INPUT:
node: <int> de te benaderen knoop uit de boom
structure: <struct> de te benaderen structure
tree: <vector> met de boomstructuur, element i bevat de index
van knoop i's ouder, nul is de 'root' knoop
labels: <cell string> namen van de knopen
OUTPUT:
S: <cell array> met de subscripts voor het benaderen van
knoop i in de structure.
See also:
subsasgn, mbdsubsasgn, gettree
ApplicationRoot\wavixIV\NETWERKBEHEER
15-Oct-2008 12:29:54
1006 bytes
do_import_network - importeer netwerken voor het bijschatten van de
hoofdsensoren naar het werkgebied
CALL:
u = do_import_network(C,fname,NetworkArray,u)
INPUT:
C: <struct> met de wavix constantes
fname: <string> met de naam van het te importeren bestand
NetworkArray: <array of struct> van netwerken zie
emptystruct('netwerk')
u: <struct> de centrale database
OUTPUT:
u: <struct> de centrale database met de lijst met neurale
netwerken aangepast
ApplicationRoot\wavixIV\NETWERKBEHEER
01-Oct-2007 10:09:38
1990 bytes
gettree - haal de boomstructuur van een 'structure' op
CALL:
[tree,labels] = gettree(structure)
INPUT:
structure: <struct> een willekeurige 'structure'
OUTPUT:
tree: <vector> met de boomstructuur, element i bevat de index
van knoop i's ouder, nul is de 'root' knoop
labels: <cell string> namen van de knopen
ApplicationRoot\wavixIV\NETWERKBEHEER
22-Dec-2004 11:30:48
1109 bytes
hinton - hinton grafiek van een matrix en een vector in een raster met
vierkanten (w is normaliter een vector met weights,
b is normaliter een vector met biases)
de oppervlakte van elk vierkant stelt de grootte van het
corresponderende element voor.
de kleur is rood voor negatieve waarden, groen voor positieve
CALL:
hinton(w,b,max_m,min_m)
INPUT:
w: <matrix> afmetingen: MxN
b: <vector> afmetingen: Mx1
max_m: <double> (optioneel) maximum absolute waarde in w
default = max(max(abs(w)))
min_m: <double> (optioneel) minimum absolute waarde in w
default = max(max(abs(w)))/100
OUTPUT:
geen directe output,
de hinton grafiek wordt afgebeeld in het huidige figuur
ApplicationRoot\wavixIV\NETWERKBEHEER
22-Dec-2004 12:53:32
3573 bytes
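An illustrative sketch of a hinton-style plot as described above (square area proportional to |w|, green for positive and red for negative values); this is an illustration, not the shipped implementation:
    w = randn(4, 3);
    m = max(abs(w(:)));
    figure; hold on; axis ij; axis equal; axis off
    for i = 1:size(w, 1)
        for j = 1:size(w, 2)
            s = 0.45 * sqrt(abs(w(i, j)) / m);       % half side, so the area is ~ |w(i,j)|
            c = [1 0 0];                             % red for negative values
            if w(i, j) > 0, c = [0 1 0]; end         % green for positive values
            patch(j + [-s s s -s], i + [-s -s s s], c)
        end
    end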
netwerkbeheer - installeer de netwerkbeheer GUI voor het importeren,
schatten en exporteren van neurale netwerken
CALL:
netwerkbeheer(obj,event)
INPUT:
obj: <handle> van de 'calling' uicontrol, (wordt niet gebruikt)
event: <leeg> standaard argument van een callback (wordt niet gebruikt)
OUTPUT:
geen directe uitvoer, het netwerkbeheer scherm wordt geopend
APPROACH:
Deze functie kijkt of het netwerkbeheer scherm al is geinstalleerd en
maakt het in dat geval current.
Zo niet, dan wordt het netwerkbeheer scherm geinitialiseerd.
Deze functiemodule bevat alle define- functies waarmee het scherm
wordt opgebouwd, en de meeste van de callback functies die vanuit het
scherm kunnen worden aangeroepen.
See also: netwerkbeheerview
ApplicationRoot\wavixIV\NETWERKBEHEER
13-Feb-2009 13:54:58
19079 bytes
netwerkbeheerview - view functie voor het netwerkbeheer scherm
CALL:
netwerkbeheerview(udnew,opt,upd,C,HWIN)
INPUT:
udnew: <struct> de centrale database
opt: <struct> GUI settings voor netwerkbeheer
upd: <struct> de te updaten scherm elementen
C: <struct> de wavix constantes
HWIN: <handle> van het netwerkbeheer scherm
OUTPUT:
geen directe output, het netwerkbeheer scherm is geupdate
ApplicationRoot\wavixIV\NETWERKBEHEER
15-Oct-2008 12:53:46
1438 bytes
nwbhconstants - definieer een aantal constantes die specifiek zijn voor
het netwerkbeheer scherm
CALL:
D = nwbhconstants
INPUT:
geen input
OUTPUT:
D: <struct> met de constantes die specifiek zijn voor het
netwerkbeheer scherm
ApplicationRoot\wavixIV\NETWERKBEHEER
30-Oct-2006 13:34:52
1053 bytes
plotperf - plot netwerk performance en als aanwezig ook de validatie- en
test performance
CALL:
stop = plotperf(tr,goal,name,epoch)
INPUT:
tr: <struct> trainingrecord
goal: <double> (optioneel) performance goal, default = NaN
name: <string> (optioneel) naam van de trainingsfunctie, default = ''
epoch: <double> (optioneel) aantal epochs, default is lengte van
trainingsrecord
OUTPUT:
stop: <integer> afbreken training
ZIE OOK:
Matlab neural network toolbox - plotperf
ApplicationRoot\wavixIV\NETWERKBEHEER
23-Dec-2004 10:51:30
6450 bytes
readasciinetwork - lees netwerken in ascii formaat in
CALL:
[networkArray,db] = readasciinetwork(filename,db)
INPUT:
filename: <string> met het in te lezen bestand
db: <struct> de centrale database, wordt alleen voor
bijwerken logboek gebruikt. Mag [] zijn als functie voor
preview doelen wordt gebruikt.
OUTPUT:
networkArray: <array of struct> met netwerken
Opmerking: wanneer aanroep niet succesvol is wordt een leeg
[0x1] structure array geretourneerd. In de aanroepende
routine kan dus evt. getest worden met isempty(networkArray)
db: <struct> de bijgewerkte centrale database.
METHODE:
- lees de file in en verwijder commentaar (% regels)
- bepaal de indices van de blokken (sjabloon en netwerk)
- lees de sjablonen in en construeer tmpnetworkStruct
voor elk sjabloon met daarin de opgegeven velden
- lees de netwerken in en construeer voor elk netwerk
een tmpnetworkStruct met daarin de gedefinieerde
velden
- combineer de tmpnetworkStruct van de sjablonen met de
tmpnetworkStruct van de netwerken tot een
netwerkstruct
ApplicationRoot\wavixIV\NETWERKBEHEER
13-Sep-2007 09:52:06
35619 bytes
showbar - show bargraph with labels and selection
CALL:
showbar(h_axis,alpha,labelx,labely,selected,threshold,value)
INPUT:
h_axis: <handle> of axis to plot in
alpha: <matrix> with values
labelx: <string> label for x-axis
labely: <string> label for y-axis
selected: index van geselecteerde elementen (worden groen
gekleurd), niet-geselecteerde worden rood gekleurd
threshold: drempel waarboven de elementen groen gekleurd worden
value: 'value' -> de waarde wordt zichtbaar als op de bar
wordt geklikt
'index' -> de index wordt zichtbaar als op de bar
wordt geklikt
OUTPUT:
none, a bargraph is plotted in the specified axes
See also: bar, patch
ApplicationRoot\wavixIV\NETWERKBEHEER
20-Sep-2006 17:38:02
2125 bytes
writeasciinetwork - schrijf de netwerken in het werkgebied weg als
.asc bestanden
CALL:
writeasciinetwork(filename,net)
INPUT:
filename: <string> de naam van het ascii-bestand
net: <array van struct> zie emptystruct('netwerk')
OUTPUT:
geen directe uitvoer, de netwerken zijn in een ascii-bestand
weggeschreven
ApplicationRoot\wavixIV\NETWERKBEHEER
22-Oct-2006 20:20:58
3344 bytes
CalcEstimateInit - bepaal de verhoudingen tussen de variabelen op een
locatie en dezelfde variabele op de andere locaties.
Daarbij wordt onderscheid gemaakt tussen verschillende
windsnelheids- en windrichtingsklassen op de locatie zelf.
Voor het geval dat er geen waarnemingen zijn voor een
bepaalde variabele op een locatie wordt er ook een
regressie uitgevoerd van de variabele met de windsnelheid
op de betreffende locatie.
CALL:
[vhg,msgstr] = CalcEstimateInit
INPUT:
geen input
OUTPUT:
vhg: <vhg-struct> met de velden:
- richting :windrichtingsklassen
- snelheid :windsnelheidsklassen
- locs :locaties
- factor :de verhoudingsgetallen; per parameter een cellmat van
aantal windrichtingsklassen bij aantal windsnelheidsklassen,
met per klasse een matrix van aantal locaties bij
aantal locaties
- sigma :de spreidingen (zelfde opbouw als factor)
msgstr: <string> met eventueel gegevens over de combinaties die
niet genoeg informatie bevatten om de regressie mee uit te voeren
AANPAK:
stap 1: Bepaal de hoofdsensoren voor de te schatten variabelen
en voor de windrichting en windsnelheid
stap 2: Selecteer een locatie en haal de windrichting en
windsnelheid reeksen van deze locatie op deze bepalen de
klassen voor de reeksen op deze locatie
stap 3: Selecteer een variabele (y) voor de locatie die in stap 2
geselecteerd is
stap 4: Selecteer dezelfde variabele als in stap 3 op een andere
locatie (x) en selecteer ook de windsnelheid op de locatie
van stap 2
stap 5: Maak de tijdsas van reeks x gelijk aan de tijdsas van reeks
y
stap 6: Voer voor elk paar van windrichtingsklasse en windsnelheidsklasse
een regressie uit van het type y = a*x en bereken
tevens de spreiding van het residu
stap 7: Sla de resultaten op in een vhg-structure
ApplicationRoot\wavixIV\REGRESSIEBEHEER
25-Sep-2007 19:26:14
9173 bytes
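An illustrative sketch of step 6 (a regression of the type y = a*x plus the residual spread), assuming x and y are column vectors of paired observations within one wind-direction/wind-speed class; this is not the shipped CalcEstimateInit code:
    ok    = ~isnan(x) & ~isnan(y);                 % use only complete pairs
    a     = (x(ok)' * y(ok)) / (x(ok)' * x(ok));   % least-squares slope through the origin
    sigma = std(y(ok) - a * x(ok));                % spread of the residual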
ConfineDias2Dia - Voeg de waarden van verschillende dia's samen en houd
daarbij de tijdsas van de eerste dia aan
CALL:
[W,S] = ConfineDias2Dia(varargin)
INPUT:
varargin: <cell array> met tenminste 2 dia's
Output:
W: <matrix> van afmeting: lengte van waarden eerste dia bij
aantal dias gespecificeerd in varargin (length(varargin))
S: <matrix> (optioneel) zelfde afmeting als W, met de stdW
(spreidingen)
Aanpak:
Stap 1a: Maak de matrix W en zet de waarden van de eerste dia
(varargin{1}) in de eerste kolom, de andere kolommen bevatten
NaN's
Stap 1b: Als nargout == 2, maak de matrix S en zet de stdW van de eerste dia
(varargin{1}) in de eerste kolom, de andere kolommen bevatten
NaN's
Stap 2a: Pak een voor een de waarden van de dias in varargin{2:end}
Stap 2b: Pak een voor een de stdW van de dias in varargin{2:end}
Stap 3: Maak de tijdsas van de vector gelijk aan de tijdsas van de
eerste dia, vul hiaten op met NaN's, maak gebruik van het feit
dat de dia's dezelfde tijdstap hebben
ApplicationRoot\wavixIV\REGRESSIEBEHEER
28-Nov-2006 17:41:04
4012 bytes
EstimateInit - Schat de reeksen van de hoofdsensoren bij m.b.v.
verhoudingsgetallen
CALL:
[msg,db] = EstimateInit(db)
INPUT:
db: <struct> de centrale database met relevante velden
- db.loc, de hoofdsensoren
- db.dia.W, de waarden van de hoofdreeksen
progressbar: <jacontrol> (optioneel) type jprogressbar
OUTPUT:
msg: <string> eventuele foutmelding
db: <struct> de velden V en stdV van de reeksen van de
hoofdsensoren in de database zijn geupdate
AANPAK:
stap 1: Bepaal de hoofdsensoren voor de te schatten variabelen
en voor de windrichting en windsnelheid
stap 2: Selecteer een locatie en haal de windrichting en
windsnelheid reeksen van deze locatie op, deze bepalen de
klassen voor de reeksen op deze locatie
stap 3: Selecteer een variabele (y) voor de locatie die in stap 2
geselecteerd is
stap 4: Selecteer dezelfde variabele als in stap 3 op alle andere
locaties (x) en selecteer ook de windsnelheid op de locatie
van stap 2
stap 5: Maak de tijdsas van de reeksen x gelijk aan de tijdsas van reeks
y
stap 6: Haal voor elke windrichtingsklasse en windsnelheidsklasse
de matrices op die met CalcEstimateInit zijn geschat en
bereken het gewogen gemiddelde van de aanwezige waarden,
(weging is omgekeerd evenredig met de spreidingsmatrix)
stap 7: bereken Th0 en Th3 als een schatting tussen de heersende
windrichting en de golfrichting in de vorige periode, stdV
wordt vast gekozen op 30
ApplicationRoot\wavixIV\REGRESSIEBEHEER
19-Oct-2007 09:49:18
11707 bytes
SensorMatrix - zet de hoofdsensoren in een matrix
Call:
[VarMat,WindMat,warning,Locs,Vars,Wind] = GetSensorMatrix(LocStructure,Locs,Vars,Wind)
Input:
LocStructure: <structure> (u.loc) met de nummers van de reeksen van
de hoofdsensoren met tenminste de velden:
1) WINDRTG
2) WINDSHD
3) De velden gespecificeerd
in Vars
en met tenminste de locatienamen gespecificeerd in Locs
aanwezig in LocStructure.sLoccod
Locs: <cell array> met de locaties waarvoor de hoofdsensoren
moeten worden gebruikt (moeten aanwezig zijn in
LocStructure.sLoccod)
Vars: <cell array> met de variabelen waarvoor de hoofdsensoren
moeten worden gebruikt (moeten velden zijn van
LocStructure)
Wind: <cell array> met de windvariabelen waarvoor de hoofdsensoren
moeten worden gebruikt (moeten velden zijn van
LocStructure)
Output:
VarMat: <matrix> length(Locs) bij length(Vars) met hoofdsensoren per
locatie voor de variabelen gespecificeerd in Vars
WindMat: <matrix> length(Locs) bij 2 met hoofdsensoren per locatie voor
WINDRTG en WINDSHD
warning: <cellstring> eventuele foutmelding of waarschuwing die in de
'calling' functie zal worden gemeld
Locs: <cell array> met de locaties waarvoor de hoofdsensoren
moeten worden gebruikt (zijn aanwezig in LocStructure.sLoccod)
Vars: <cell array> met de variabelen waarvoor de hoofdsensoren
moeten worden gebruikt (zijn aanwezig in LocStructure)
Wind: <cell array> met de windvariabelen waarvoor de hoofdsensoren
moeten worden gebruikt (zijn aanwezig in LocStructure)
ApplicationRoot\wavixIV\REGRESSIEBEHEER
22-Oct-2006 18:39:38
4407 bytes
buildmatstring - maak de matrix behorend bij de factor en sigma velden
van het vhg veld in de database
CALL:
str = buildmatstring(udnew,variabele,snelheid,richting)
INPUT:
udnew: <struct> de centrale database met relevant veld:
- udnew.vhg
variabele: <int> het nummer van de variabele in udnew.vhg
snelheid: <int> het nummer van de snelheidsklasse
richting: <int> het nummer van de richtingsklasse
OUTPUT:
str: <string> de waarden voor factor en sigma voor de
geselecteerde variabele in de opgegeven
windrichtings-/windsnelheidsklasse
ApplicationRoot\wavixIV\REGRESSIEBEHEER
13-Oct-2006 19:51:10
1540 bytes
buildpopupstring - maak de strings voor de popupboxen in het regressiebeheer
scherm (Variabele, Windsnelheid, Windrichting)
CALL:
str = buildpopupstring(udnew,mode)
INPUT:
udnew: <struct> de centrale database
mode: <string> met de mogelijke waarden:
'variabele'
'snelheid'
'richting'
deze geven aan voor welke popupbox de velden aangepast moeten
worden
OUTPUT:
str: <string> de popupstring voor de popupbox in het regressiebeheer
scherm, '<leeg>' als er nog geen hoofdsensoren aangewezen zijn
ApplicationRoot\wavixIV\REGRESSIEBEHEER
21-Dec-2004 13:58:02
1235 bytes
do_import_regmodel - Import regression model to the database
CALL:
u = do_import_regmodel(vhg,u,fname)
INPUT:
vhg: <struct> met het regressiemodel, met relevante velden
- richting
- snelheid
- locs
- factor
- sigma
u: <struct> de centrale database met relevante velden:
- data.vhg
- data.model.vhgfile
fname: <string> (optioneel) filename van het te importeren regressiemodel
OUTPUT:
u : Updated database
ApplicationRoot\wavixIV\REGRESSIEBEHEER
01-Oct-2007 10:08:50
4512 bytes
regbhview - view functie voor het regressiebeheer scherm
CALL:
regbhview(udnew,opt,upd,C,HWIN)
INPUT:
udnew: <struct> de centrale database
opt: <struct> GUI settings voor regressiebeheer
upd: <struct> de te updaten scherm elementen
C: <struct> de wavix constantes
HWIN: <handle> van het regressiebeheerscherm
OUTPUT:
geen directe output, het regressiebeheer scherm is geupdate
ApplicationRoot\wavixIV\REGRESSIEBEHEER
15-Oct-2008 12:28:04
1473 bytes
regressiebeheer - installeer de regressiebeheer GUI
CALL:
regressiebeheer(obj,event)
INPUT:
obj: <handle> van de 'calling' uicontrol, (wordt niet gebruikt)
event: leeg, standaard argument van een callback (wordt niet gebruikt)
OUTPUT:
geen directe uitvoer, het regressiebeheer scherm wordt geopend
METHODE:
Deze functie kijkt of het regressiebeheer scherm al is geinstalleerd en
maakt het in dat geval current.
Zo niet, dan wordt het regressiebeheer scherm geinitialiseerd.
Deze functiemodule bevat alle define- functies waarmee het scherm
wordt opgebouwd, en de meeste van de callback functies die vanuit het
scherm kunnen worden aangeroepen.
See also: regbhview
ApplicationRoot\wavixIV\REGRESSIEBEHEER
15-Oct-2008 12:29:06
14751 bytes
shiftvector - verschuif de vector langs een tijdsas
CALL:
shiftvector = shiftvector(vector,deltaT)
INPUT:
vector - <array> de vector die verschoven moet worden in de tijd
deltaT - <array of int> een vector met tijdverschuivingen b.v.
[-2 -1 0 1]
OUTPUT:
shiftvector - <array> lengte vector bij lengte deltaT elke kolom van
shiftvector is vector verschoven in de richting van een
element van deltaT en aangevuld met NaN's aan de randen
ApplicationRoot\wavixIV\REGRESSIEBEHEER
21-Dec-2004 10:52:26
989 bytes
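A minimal sketch of the described time shift with NaN padding at the edges; the sign convention (positive deltaT shifts forward in time) is an assumption, and this is an illustration rather than the shipped shiftvector code:
    vector  = (1:5)';  deltaT = [-1 0 1];
    n       = numel(vector);
    shifted = NaN(n, numel(deltaT));
    for k = 1:numel(deltaT)
        d   = deltaT(k);
        src = max(1, 1-d):min(n, n-d);     % indices that stay inside the vector after shifting
        shifted(src + d, k) = vector(src); % column k is vector shifted over deltaT(k)
    end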