
Neural Networks

Pioneer

Hi everyone,

 

I'm working on my model 2, and it is now OK. I used the Experiment Manager to optimize the buffers, and now it is time to work with neural networks in Tecnomatix Plant Simulation.

In the Experiment Manager, the multi-level experimental design is:

multi-level experiment.jpg

I connected my Experiment Manager to the NeuralNet tool.

Can someone explain to me what the diagrams mean (diagram1, diagram2, diagram3, and diagram4)? I used the examples in Tecnomatix Plant Simulation and also the Help in Tecnomatix Plant Simulation, and researched every model step by step, but I still don't understand some information about these diagrams.

 

I would really appreciate your answers.

 

diagram1 and diagram2.jpg

diagram3 and diagram 4.jpg

 

 

Best regards,

Andreas Domuz


3 REPLIES

Re: Neural Networks

Siemens Phenom

Hello Andreasps,

 

there is relatively extensive documentation of this tool in the Reference Help. Basic knowledge about neural networks is required. The shown diagrams can be reproduced with the corresponding example from the Example Collection: Category: Tools and Optimization, Topic: Neural networks, Example: Production system (this way all readers can benefit).

 

The quality of the training is analyzed in the lower group of the second tab and explained in the chapter Training and Checking the Neural Network of the Reference Help. You must know that the data from the Experiment Manager are divided into training data and validation data (as mentioned on that page of the documentation). Your first and second diagrams show large relative errors during the training progress (the x-axis is the number of learning steps).
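To make the split concrete, here is a small generic Python sketch of the idea: the experiment results are divided into training and validation data, and a relative error is tracked per learning step. This only illustrates the concept; the function names and the 80/20 split fraction are my own assumptions, not Plant Simulation's internals.

```python
import random

def split_data(samples, train_frac=0.8, seed=42):
    """Divide the experiment results into training and validation sets."""
    rnd = random.Random(seed)
    shuffled = list(samples)
    rnd.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def relative_error(predicted, target):
    """Relative error, the quantity plotted over the learning steps."""
    return abs(predicted - target) / abs(target)

samples = [(x, 2.0 * x) for x in range(1, 21)]  # (input, output) pairs
train, valid = split_data(samples)              # 16 training, 4 validation
```

The validation data are never used for the weight updates themselves; they only serve to check whether the trained network generalizes beyond the training points.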

I recommend trying other settings in the Configuration dialog.

Maybe you can increase the number of hidden layers on the first tab. Please note that if you want to make a change, you must reset the training.

It is also possible to improve the training by treating the noise in the data adequately. Try the setting Noise by percentage. The variance is frequently too large.
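The general idea behind a percentage-based noise tolerance can be sketched as follows. What exactly Plant Simulation's Noise by percentage setting does internally is described in the Reference Help; this function and its 5 % default are purely my own illustration of the concept, not the tool's implementation.

```python
def within_noise_band(value, target, noise_pct=5.0):
    """A deviation counts as noise if it stays within noise_pct of the target,
    so the tolerated band scales with the magnitude of the value itself."""
    return abs(value - target) <= abs(target) * noise_pct / 100.0

print(within_noise_band(102.0, 100.0))  # 2 % deviation: inside the band
print(within_noise_band(110.0, 100.0))  # 10 % deviation: outside the band
```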

 

Such large errors often occur after a small number of learning steps. You selected 100; increase the number of training steps to 1000.

 

The fourth diagram also shows large differences with respect to the values of an input value on the x-axis. It seems that the training was not successful.

 

But your third diagram is surprisingly good: the curve showing the dependence of the output value on the input value has the expected shape.

 

Regards,

Peter

Re: Neural Networks

Pioneer

Thank you Peter,

 

This manufacturing process with neural networks is very interesting to me.

What type of neural network is used in Tecnomatix Plant Simulation? Is it a multilayer perceptron (MLP)?

I am trying to understand this process, and MLPs in general.

 

I have general theoretical knowledge about neural networks. There are many chapters on websites (Scribd, Academia, Google) about artificial neural networks, but I didn't find any chapter that explains the numbers.

What do the numbers mean in the production process?

numbers..jpg

 

Best regards,

Andreas Domuz

Re: Neural Networks

Siemens Phenom

Hello Andreas,

 

the structure of the Neural Network (NN) is described in a corresponding chapter of the Reference Help. The term perceptron is not used in this documentation.

There are many textbooks about NNs. Two books are mentioned in the Reference Help on the first page of the chapter Artificial Neural Network. A short overview of the technical details is given in the technical report Back Propagation Family Album (1996).

 

I want to explain the basic ideas. A trained NN is described by two (or three) matrices, the so-called weights. The backpropagation learning algorithm minimizes the error between the output of the NN (calculated from the weights and the activation function) and the training data. The learning algorithm uses the gradient idea.
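These basic ideas can be sketched with a textbook example: a tiny one-hidden-layer network whose two weight matrices are adjusted by gradient steps that reduce the squared error against the training data. The network size, the toy data (y = x² on [0, 1]), and all constants below are my own illustration, not Plant Simulation's internal code.

```python
import math
import random

random.seed(0)

N_HID = 3
# Two weight matrices (each row carries a bias entry) describe the network.
w1 = [[random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)] for _ in range(N_HID)]
w2 = [random.uniform(-0.5, 0.5) for _ in range(N_HID + 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    hid = [sigmoid(w[0] * x + w[1]) for w in w1]   # hidden activations
    out = sum(w2[i] * hid[i] for i in range(N_HID)) + w2[N_HID]
    return hid, out

# Toy training data: y = x^2 sampled on [0, 1].
data = [(i / 10.0, (i / 10.0) ** 2) for i in range(11)]

eta = 0.5  # learning rate
for step in range(2000):
    for x, y in data:
        hid, out = forward(x)
        err = out - y  # derivative of the squared error w.r.t. the output
        # Hidden-layer gradients use the old output weights
        # (chain rule through the sigmoid derivative hid * (1 - hid)).
        d_hid = [err * w2[i] * hid[i] * (1.0 - hid[i]) for i in range(N_HID)]
        for i in range(N_HID):            # output-layer gradient step
            w2[i] -= eta * err * hid[i]
        w2[N_HID] -= eta * err
        for i in range(N_HID):            # hidden-layer gradient step
            w1[i][0] -= eta * d_hid[i] * x
            w1[i][1] -= eta * d_hid[i]
```

After training, `forward` reproduces the curve y = x² to within a small error, which is exactly what the diagrams of a successful training should show.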

 

The Magnitude of activation Beta is evaluated by the activation function.

If the relative error is too small, then the training algorithm cannot detect the direction of improvement. A reinitialization of the weights is then necessary.

 

For each learning step we get matrices of weights. An update of the weights uses the weights of the last learning step and of the step before. Both methods are applied in each step of the training: the first uses the dynamically adapted learning rate Eta; the second is called the momentum method and uses the parameter Alpha.
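The combination of the two methods can be written as a small sketch: a gradient step scaled by the learning rate Eta, plus a momentum term that reuses the previous step's weight change, scaled by Alpha. The toy error function E(w) = w² and the concrete parameter values are illustrative only; Plant Simulation adapts Eta dynamically, which this sketch does not do.

```python
def momentum_update(w, grad, prev_delta, eta=0.1, alpha=0.5):
    """One weight update: current gradient step plus momentum term."""
    delta = -eta * grad + alpha * prev_delta
    return w + delta, delta

# Minimize the toy error E(w) = w^2, whose gradient is dE/dw = 2w.
w, prev = 5.0, 0.0
for _ in range(50):
    w, prev = momentum_update(w, 2.0 * w, prev)
```

The momentum term smooths the trajectory of the weights: successive steps in the same direction reinforce each other, while oscillations partly cancel out.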

 

At the beginning of the training, the weights are chosen at random. Their size is determined by the Magnitude of weights.

 

The dialog contains recommendations for these parameters. The success of the training depends on these parameters. Please try only small changes.

 

Regards,

Peter