I have 4 buffers between SingleProcs in a manufacturing line. The individual buffer capacities can change, but their sum (n) has to stay the same, e.g.:
b1 = 5; b2 = 15; b3 = 5; b4 = 10; n = 35
b1 = 3; b2 = 20; b3 = 7; b4 = 5; n = 35
I want to maximise the ThroughputPerHour and minimise the WorkInProgress by changing the buffer capacities. I wrote a method to define the experiment values for the ExperimentManager:
local tab := ExperimentManager.ExpTable;
for local x1 := 0 to n loop
  for local x2 := 0 to n loop
    for local x3 := 0 to n loop
      for local x4 := 0 to n loop
        if x1 + x2 + x3 + x4 = n then -- constraint
          tab.writeRow(1, tab.yDim + 1, true, x1, x2, x3, x4);
        end;
      next;
    next;
  next;
next;
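The innermost loop is unnecessary: once x1, x2 and x3 are fixed, x4 is forced to n - x1 - x2 - x3, so you can enumerate only the valid combinations instead of testing all (n+1)^4 tuples against the constraint. A sketch of the restructuring in Python (the same change works in the SimTalk method above):

```python
from math import comb

def compositions(n, parts=4):
    """All non-negative integer 4-tuples summing to n, enumerated directly:
    the last coordinate is computed, not searched for."""
    out = []
    for x1 in range(n + 1):
        for x2 in range(n + 1 - x1):
            for x3 in range(n + 1 - x1 - x2):
                x4 = n - x1 - x2 - x3   # forced by the constraint
                out.append((x1, x2, x3, x4))
    return out

rows = compositions(20)
print(len(rows))        # 1771
print(comb(23, 3))      # stars-and-bars check: C(n+3, 3) = 1771
```

For n = 20 this touches exactly the 1,771 valid rows instead of iterating 21^4 = 194,481 times, but note it only speeds up filling the experiment table; the number of experiments to simulate stays the same.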
It works fine except that it takes extremely long, especially if n is a big number. For n = 20 the four nested loops already run 21^4 = 194,481 times to find the 1,771 valid combinations, all of which are then executed as experiments, and afterwards you have to scroll down and judge by eye which combination is best (so you are never 100% sure it really is the best one).
Any help on how to achieve this quicker and more accurately?
Quicker: use a GA (genetic algorithm).
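To illustrate the GA idea: evolve a population of buffer allocations whose entries always sum to n, so every individual satisfies the constraint by construction. A minimal sketch in Python, with a made-up fitness function standing in for the simulation run (in Plant Simulation each fitness evaluation would be one simulation experiment instead):

```python
import random

N, POP, GENS = 35, 30, 40

def fitness(b):
    # Placeholder only: a real fitness would come from a simulation run,
    # e.g. ThroughputPerHour minus a WorkInProgress penalty.
    return -sum((x - N / 4) ** 2 for x in b)

def repair(b):
    """Adjust a tuple so its entries are non-negative and sum to N."""
    b = [max(0, x) for x in b]
    while sum(b) != N:
        i = random.randrange(4)
        if sum(b) < N:
            b[i] += 1
        elif b[i] > 0:
            b[i] -= 1
    return b

def mutate(b):
    b = b[:]
    i, j = random.sample(range(4), 2)
    if b[i] > 0:        # move one unit of capacity between two buffers,
        b[i] -= 1       # so the total stays at N
        b[j] += 1
    return b

def crossover(a, b):
    cut = random.randrange(1, 4)
    return repair(a[:cut] + b[cut:])

random.seed(1)
pop = [repair([random.randrange(N + 1) for _ in range(4)]) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]                 # keep the best half
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]
best = max(pop, key=fitness)
print(best, sum(best))
```

With 30 individuals over 40 generations this evaluates at most 1,200 allocations, regardless of how large n (and hence the full enumeration) gets; the trade-off is that a GA finds a good allocation, not a provably optimal one.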
The results are stored in the results table of the ExperimentManager. You can access it with SimTalk, and a few lines are enough to find your best experiment, e.g. (assuming the results table is called res, with the experiment name in column 1 and the target value in column 2):

local best := 1;
for local i := 2 to res.yDim loop
  if res[2, i] > res[2, best] then best := i; end; -- find max result
next;
print res[1, best]; -- name of the best experiment
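Note that with two targets (maximise ThroughputPerHour, minimise WorkInProgress) a single "best" row is not always well defined; filtering the results table down to the non-dominated rows shows the actual trade-off. A sketch in Python, with a made-up results list standing in for the ExperimentManager table:

```python
# Hypothetical results: (experiment name, ThroughputPerHour, WorkInProgress)
results = [
    ("Exp 1", 118.0, 22.0),
    ("Exp 2", 121.5, 30.0),
    ("Exp 3", 118.0, 18.5),
    ("Exp 4", 110.0, 12.0),
]

def pareto_front(rows):
    """Keep rows not dominated by another row that is at least as good on
    both targets and strictly better on one."""
    front = []
    for name, tph, wip in rows:
        dominated = any(t2 >= tph and w2 <= wip and (t2 > tph or w2 < wip)
                        for _, t2, w2 in rows)
        if not dominated:
            front.append((name, tph, wip))
    return front

print(pareto_front(results))
# Exp 1 is dominated by Exp 3 (same throughput, less WIP); the rest remain.
```

You then only have to compare the handful of Pareto-optimal rows by eye instead of the whole table.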
freelance simulation specialist
Referring to the enumeration in your (4 buffer) loops, there are several ways of speeding up the computation of the experiments:
- setting the loop step to > 1 (a coarser grid of buffer sizes, refined later around the best region)
- using the distributed simulation provided by the ExperimentManager to run the experiments on all available computer cores
- using the neural net (NN) provided by PSi to compute the (buffer-dependent) experiment behaviour. Once trained on a simulated subset of (random) buffer sizes (e.g. 200), the NN can compute the requested target values far faster than the simulation system.
Check the experiment manager and the libraries for details.
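The NN suggestion is a surrogate-model approach: simulate a small random subset, fit a cheap model to it, then let the model score every allocation and re-check only the top candidates with real runs. A minimal sketch in Python using least-squares regression in place of a neural net; the simulate function here is made up to stand in for a real simulation run:

```python
import random
import numpy as np

N = 35

def simulate(b):
    # Stand-in for one real simulation run returning ThroughputPerHour.
    # (Made up: throughput drops when capacity is unevenly spread.)
    return 120.0 - 0.1 * sum((x - N / 4) ** 2 for x in b) + random.gauss(0, 0.5)

def random_alloc():
    """Random non-negative 4-tuple summing to N."""
    cuts = sorted(random.randrange(N + 1) for _ in range(3))
    return (cuts[0], cuts[1] - cuts[0], cuts[2] - cuts[1], N - cuts[2])

random.seed(0)
train = [random_alloc() for _ in range(200)]       # simulated subset, 200 runs
y = np.array([simulate(b) for b in train])

feats = lambda b: [1.0, *b, *(x * x for x in b)]   # quadratic features
X = np.array([feats(b) for b in train], dtype=float)
w, *_ = np.linalg.lstsq(X, y, rcond=None)          # the "training" step

def predict(b):
    return float(np.array(feats(b)) @ w)

# The fitted model scores all 8,436 allocations for N = 35 almost instantly;
# only the top few would be re-checked with real simulation runs.
best = max(((x1, x2, x3, N - x1 - x2 - x3)
            for x1 in range(N + 1)
            for x2 in range(N + 1 - x1)
            for x3 in range(N + 1 - x1 - x2)),
           key=predict)
print(best, round(predict(best), 1))
```

The key economics: 200 simulation runs for training plus a cheap full scan, instead of 8,436 simulation runs for exhaustive enumeration at n = 35.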