I am using membrane elements 1e-6m thick.
Nastran seems to go to sleep during the calculations; however, it is clearly busy doing something, since it consumes almost 700 GB of scratch space.
17:52:48 Analysis started.
17:52:48 Geometry access/verification to CAD part initiated (if needed).
17:52:48 Geometry access/verification to CAD part successfully completed (if needed).
17:52:56 Finite element model generation started.
17:53:37 Finite element model generated 2571032 degrees of freedom.
17:53:37 Finite element model generation successfully completed.
17:53:48 Application of Loads and Boundary Conditions to the finite element model started.
17:53:49 Application of Loads and Boundary Conditions to the finite element model successfully completed.
22:33:32 Solution of the system equations for frequency response started.
22:33:33 Solution of the system equations for frequency response successfully completed.
forrtl: error (200): program aborting due to window-CLOSE event
I killed the job at 9am. It had written nothing to the f04 since before 1am.
Rather frustrating really.
I too am battling with a random-vibration analysis of a large model, which has TET10s, but I have skinned it with CTRIA6 membrane elements and am only asking for output on these membrane elements.
The shell coat method is probably the only way to get results in a semi-timely manner with SOL111. That's precisely what we used to do before we started using SATK.
It's beyond frustration...
I am discussing the issue with "technical support" and will see what comes out of it. Will post something if useful.
I'll look at my f04/log, but it seems that the problem is very similar: I top out at ~900 GB of I/O or scratch space.
It seems that the RMS calculation is simply not very efficient, which is quite surprising to me. It might be related to the number of elements being processed: not too many, not too few, just the right number is needed...
The thickness used for the 2D coat (membrane) is a bit less than what I would use (0.1 mm), but it probably makes no difference.
Are you sure you guys have enough scratch, and are instructing Nastran to use more than the default (we usually set SSCR=800GB on our 1TB SDIR in the rcf file)?
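For anyone not familiar with those settings, here is a minimal sketch of what that looks like in the runtime configuration file. The directory path and sizes below are placeholders; check the keyword spellings against your Nastran version's install guide:

```
$ Example nastran.rcf entries (values are placeholders for your setup)
sdirectory=D:\scratch      $ scratch directory on the big drive
sscr=800gb                 $ let the scratch dbset grow to 800 GB
```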
When we use shell coats, we typically use a thickness of 1E-5; it doesn't seem to create numerical instabilities, and we're too chicken to use anything thicker :-)
Might be. I have only 400 GB left on my machine. Everything is run locally.
I'll check the scratch requirement with my test case. If the scratch size used for the ~200/300 elements was, say, ~150 GB, then I wouldn't be surprised if the whole thing falls flat on its face with a (very) large model.
I really didn't anticipate that such a "simple" task (calculating an RMS value!!) requires so much memory/disk.
You can always run the model with 2 or 3 different membrane thicknesses; if the "key" mode frequencies don't change, you know it makes no difference! I am always wary of using extreme values.
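For the thickness sweep, the only thing that needs to change between runs is the T field on the coat's shell property. A hypothetical membrane-only PSHELL (PID and MID are placeholders; MID2/MID3 left blank so the CTRIA6 coat carries membrane stiffness only) might look like:

```
$ Run 1: T = 1.0E-5; change only field 4 for runs 2 and 3
PSHELL  9440001 1       1.0-5
```

If the key mode frequencies and the recovered RMS stresses are insensitive to T across the sweep, the coat is behaving as a pure strain gauge, which is what you want.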
Could you post the output request you have defined for your SOL111? I may have something...
My output request is
SET 1 = 1000001
ACCELERATION(SORT2,PUNCH,RPUNCH,NORPRINT,PHASE,PSDF) = 1
SET 2 = 94400000 THRU 94513937
STRESS(SORT1,PUNCH,RPUNCH,NORPRINT,PHASE,RMS) = 2
No MEFFMASS specified then. Hmmm, not sure then.
I am using
STRESS(NORPRINT) = 2
Running a new analysis with my full/real model. Will post something if it works.
No, I recovered the modal masses in a separate run. This run was purely for stresses, and I recover the acceleration at the base input so I can confirm that the input is correct.
Let me know if you find a way. I am all out of sufficient scratch space at the moment.
A small step forward, but it still crashed. I excluded the MEFFMASS output, and this time Nastran issued a clear message: USER INFORMATION: THIS ERROR MAY BE CAUSED BY EXCEEDING THE CAPACITY OF A SYSTEM RESOURCE
Summing all the HIWATER values from the .f04, it seems that I need at least 490 GB available on my disk (I only have 420 GB left!)
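In case it saves anyone the hand-tallying, here is a quick sketch of that summation. It assumes (hypothetically) that each relevant .f04 line carries its figure as an integer somewhere after the HIWATER keyword, and that the value is in 64-bit words as on a typical 64-bit build; check both assumptions against your file's actual columns before trusting the total.

```python
import re

def sum_hiwater_gb(f04_text):
    """Sum HIWATER figures from .f04 text and return GB.

    Assumes each HIWATER line carries one integer value (in 8-byte
    words) after the keyword; adjust the regex/units to match your
    f04's actual layout.
    """
    total_words = 0
    for line in f04_text.splitlines():
        if "HIWATER" not in line:
            continue
        m = re.search(r"HIWATER\D*(\d+)", line)
        if m:
            total_words += int(m.group(1))
    # 1 word = 8 bytes on a 64-bit build
    return total_words * 8 / 1024**3
```

Running it over the whole .f04 gives a lower bound on the scratch you need; if that already exceeds free disk, the job was never going to finish.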
Will do a couple of tests and try my restart from SOL103.
Just out of interest, what are your constraints on the node(s) being excited?