I'm seeing some strange behavior on a model solve. I think it should be solving faster.
I'm using 6 CPUs (probably relevant). The CPU history is shown below, as is the reduction history. I've marked the points on the reduction history where the solution 'hangs'. I've got 20 GB of HD free, and 50 GB of RAM free.
It's really not a big model, only 73k nodes, but it does have block connectivity between the parts through CGAP elements, plus two very large RBE3 elements.
The same model runs cleanly in linear static analysis though. Not exactly the same model, actually, but one with the same RBE3 clusters. I'll try some other tests.
Update: It runs the same way in linear analysis, so matrix density due to the RBE3s is almost certainly the problem. I'm still surprised that the solver stops being parallel, but that could be because there is so much coupling that there is no inherent parallelism when the factorization moves across those RBE3s.
Have you checked the f04 file to ensure you have allocated enough RAM?
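If memory does turn out to be short, the allocation is usually raised when the job is submitted. A minimal sketch, assuming a command-line submission (keyword spellings and value formats vary between Nastran versions and site setups, and the numbers here are placeholders, not recommendations):

    nastran model.dat memory=8gb parallel=6

Here parallel requests SMP threads (to match your 6 CPUs) and memory sets the solver workspace; the top of the f04 will confirm what was actually granted.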
Also, please remember: RBE elements are linear only, and gaps assume no relative motion in the shear direction. Both of these can lead to issues if you are considering large displacements.
Another thing that can go wrong with nonlinear performance: if your model is mostly solids, did you constrain the rotational DOFs (components 4, 5, and 6)? Be aware that AUTOSPC is off for nonlinear. Did you check the grid point singularity table of the linear run to see what was being constrained there?
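In case it's useful, a minimal sketch of constraining those rotations explicitly in the bulk data. The set ID and grid range are hypothetical placeholders; substitute the grids that connect only to solid elements, and select the set with SPC = 100 in case control:

    $ Hypothetical IDs: constraint set 100, solid-only grids 1001 through 1500
    SPC1,100,456,1001,THRU,1500

You could also try turning AUTOSPC back on for the nonlinear solution with PARAM,AUTOSPC,YES, but explicit constraints leave less to chance.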
I'm doing small-displacement stuff, but I'm surprised to hear that the coupling code is linear only. Even the edge-to-solid weld connection? So such a connection couldn't be rotated 90° in an NLA, for example?
RBE2 and RBE3 elements are small displacement only by default. The equations are written based on undeformed geometry and are not updated as the model deforms.
The exception would be if you specified "thermal expansion rigids" to trigger Femap to write the case control command RIGID=LAGRAN. This enables thermal expansion, large displacement, and differential stiffness for the rigid elements.
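For reference, a minimal sketch of where that command sits in the input file. The subcase layout and set IDs below are illustrative, not pulled from your model:

    SOL 106
    CEND
    $ Lagrange formulation for RBE2/RBE3: adds large displacement,
    $ differential stiffness, and thermal expansion support
    RIGID = LAGRAN
    SUBCASE 1
      SPC = 100
      LOAD = 200
      NLPARM = 1
    BEGIN BULK

Whether LAGRAN is available, and exactly what it is called, depends on the solver version, so check the case control documentation for your install.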
Glue connections are not a problem in nonlinear.
Also, just to reinforce: the f04 file contains the information required to understand any performance issue. You could post it if you would like some opinions on possible performance improvements.
Note that sparse solvers do internally renumber to minimize matrix bandwidth.
Not really. The one reordering I know of for sparse solvers is AMD (approximate minimum degree), which doesn't minimize bandwidth at all; usually quite the opposite. It minimizes fill-in, which in turn reduces the number of reduction operations and the storage required.