I am trying to run a simulation in NX Nastran 8.1 and it gets stuck.
It is a large model of an aluminum frame containing approx. 4.5M nodes, 600k quad/tria shell elements, and 2.5M Tetra10 elements. The bolts are modeled as rigid elements. The only load is gravity, in SOL 101.
The simulation starts, the .f06 shows no errors or warnings, and it gets stuck after the weight check and
'*** USER INFORMATION MESSAGE - SINGULARITIES FOUND USING EIGENVALUE METHOD...' After that it writes only SUBCASE = 1.
Because of the size of the model I left it running overnight and through the next day, but nothing changed: same status in the .f06, same size of the scratch files. CPU load is 100%.
Then I tried coarsening the solid mesh, reducing the model to only 0.7M Tetra10 elements with everything else unchanged. That simulation finishes in 20 minutes without any problems.
Do you have any idea what the problem could be?
Hardware: 8x Intel 3.2GHz cores, 256GB RAM, 1.2TB of disk space (no SSD).
Thank you for any advice!
The input file is here:
INIT DBALL LOGI=(DBALL(400GB))
INIT SCRATCH LOGI=(SCRATCH(400GB))
TITLE = XXX
ECHO(PLOT) = NONE
DISP(PLOT) = ALL
STRESS(PLOT) = ALL
SUBCASE = 1
PARAM,GRDPNT,0
PARAM,AUTOSPC,YES
PARAM,PRGPST,NO
PARAM,EPZERO,1.E-7
SPC1,1,12345,10,THRU,13
GRAV,10,0,9810.,0.0,0.0,-1.0
The .f04 and .log files may provide more information. Unfortunately, the IO buffer for the .f04 and .f06 files is not flushed at every write. On large solutions, several lines of text indicating that the next steps are complete may not be written to the output files in a timely manner.
It is interesting that the CPU is at 100%. The typical bottleneck for Nastran is IO; the more usual complaint is that the log files show no progress while CPU usage is next to nothing, because the CPU is waiting on IO.
Post the .f04, .f06 and .log files to give a more complete picture.
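Since the buffered .f04/.f06 output makes it hard to tell a slow run from a truly stuck one, a crude workaround is to watch the file size rather than the text. This is just an illustrative sketch (the polling helper and its parameters are my own, not part of Nastran):

```python
import os
import time

def watch_f04(path, interval=30.0, max_polls=None):
    """Poll an output file (e.g. a Nastran .f04 or .log) for size growth.

    Nastran buffers its writes, so new text appears in bursts; a file
    that keeps growing means the solver is still making progress even
    when no new lines are visible yet. Returns the last observed size.
    """
    last = -1
    polls = 0
    while max_polls is None or polls < max_polls:
        size = os.path.getsize(path)
        if size != last:
            print(f"{path}: {size} bytes")
            last = size
        polls += 1
        time.sleep(interval)
    return last
```

If the file size keeps increasing, the run is likely slow rather than hung, and the buffered text simply has not been flushed yet.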
When it comes to Nastran I/O problems, Jim is the expert here; I am always learning from him. Since your model has 2.5M TET10 elements (and you do not mention any contact between solid and shell surfaces; if such contact exists, avoid this approach), I suggest activating the Element Iterative Solver instead of the default direct sparse matrix solver; it is extremely fast for large solid models. Simply include the corresponding statement in the Executive Control section (before CEND):
I also suggest including the specification for the maximum number of CPUs (CPUs=8 here) to activate shared-memory parallel (SMP) processing in several numeric modules. SMP processing reduces elapsed time at the expense of increased CPU time.
The simulation has to be run with the ILP-64 version of the Nastran solver, not the LP-64 one.
The LP-64 version allocates 4 bytes per integer word, which was not enough for the large model. ILP-64 uses 8 bytes per word, so it can address much more memory.
The simulation has to be run using the keyword nastran64L (not just nastran64).
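To see why 4-byte integer addressing can run out on a model of this size, here is a rough back-of-the-envelope estimate in Python. The per-row density and fill-in factor are assumed round numbers for illustration, not measured values:

```python
# Rough estimate of sparse-factor size vs. the 32-bit integer limit.
nodes = 4.5e6            # from the model description above
dof = nodes * 3          # ~3 translational DOF per solid node (rough)
nnz_per_row = 80         # assumed average stiffness-matrix row density
fill_factor = 20         # assumed fill-in during direct factorization
factor_nnz = dof * nnz_per_row * fill_factor
int32_max = 2**31 - 1    # largest index a 4-byte signed word can hold

print(f"estimated factor nonzeros: {factor_nnz:.2e}")
print(f"overflows 32-bit indexing: {factor_nnz > int32_max}")
```

With these assumptions the factor holds on the order of 2e10 entries, roughly ten times what 4-byte indices can address, which is consistent with the full model hanging while the coarsened 0.7M-element model ran fine.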
Thanks for your effort!
Hopefully this can help someone else too.