I have a 2 million DOF solid TET10 model on which I'm trying to perform a linear contact analysis in NX Nastran 10.2. At this stage my model takes about 1 hour to converge using the normal direct sparse solver. I've heard of and seen presentations suggesting that requesting the element iterative solver could decrease run time 6-10 fold. Unfortunately, when I ran the job with the element iterative solver it took almost 5x as long as the original runs using the direct solver.
I wanted to know if anyone has any tips on the element iterative solver for linear contact runs. In particular, has anyone had any success, perhaps by suggesting certain BCPARAM entries or ISTEP bulk data entries that could decrease the convergence time without sacrificing too much accuracy?
Also, I have only one subcase, and my source and target mesh densities are exactly the same.
With the current versions of FEMAP (v11.2.2) and NX Nastran (v10.2), I can tell you that for a 2 million DOF model meshed with 3-D solid TET10 elements (i.e., around 650,000 nodes) it should be impossible for the element iterative solver to take 5x longer than the direct sparse solver. One possible reason is that you are using an old version of FEMAP or NX Nastran.
In fact, reviewing old version numbers, I can read:
Another reason could be that your surface-to-surface contact setup is not correct: make sure to click DEFAULTS; your options should look similar to this picture:
I'm running Femap v11.2.2 and, definitely, NX Nastran 10.2.
My contact properties are similar to your screenshot with the exception of the following two entries:
-Max Contact Search Distance is smaller than yours, I'm using 0.03
-Initial Penetration is set to Option 3: Zero/Gap Zero Penetration
My element edge length in the contact region is 0.1
Those are the only two differences, and I can't see how they would cause such a huge disparity in run time.
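For reference, those two FEMAP settings map roughly to the NX Nastran linear contact bulk data as sketched below. The region IDs are hypothetical, and the field names/layout are from memory of the NX Nastran Quick Reference Guide, so verify them against your version before relying on this:

```
$ Hedged sketch, not an exact deck. Region IDs 10/20 are made up.
$ BCTSET: contact pair definition; the last two fields are the
$ min/max contact search distances (max = 0.03 here).
BCTSET   1       10      20      0.0     0.0     0.03
$ BCTPARM: solver-level contact parameters for the same contact set;
$ INIPENE=3 is the "gaps and penetrations set to zero" option.
BCTPARM  1       INIPENE 3
```

FEMAP normally writes these entries for you from the contact property dialog, so the main value of looking at the exported deck is confirming that the GUI settings actually reached the solver.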
Sorry, I misunderstood and thought you were using FEMAP v10.2!
Then post your model here and we will take a look at it, OK?
Unfortunately I can't post the exact model, but maybe I can try to replicate it in another, simpler model. I really think the key to making the element iterative solution converge faster lies somewhere in the contact convergence parameters, but I just don't know which ones to change to get a faster run.
Have you seen appreciable time savings using the element iterative solver on a solid model?
In old versions of NX Nastran the PCG iterative solver (element iterative) didn't use the parallel solver while the DIRECT SPARSE solver did, so in those days the direct sparse solver had an advantage over the element iterative solver. But today that is no longer the case.
The DIRECT SPARSE solver is the most robust, accurate, and reliable option in NX Nastran (you get the exact answer), well suited to models where accuracy is paramount. The problem is its huge RAM requirement, which makes it prohibitive for models above, say, 500,000 nodes, unless you have a lot of RAM (a minimum of 64 GB, plus the ILP-64 8-bytes-per-word solver activated). With shells and beams, the direct sparse solver is always my preferred option.
The ELEMENT ITERATIVE solver (note that with NX Nastran you can choose between the global iterative and element iterative solvers) performs particularly well on solid-element-dominated models. It may be a faster choice if somewhat lower accuracy is acceptable. Also, for problems involving contact and 3-D solid elements, the element iterative solver is generally faster than the sparse direct solver. Most importantly, it requires a fraction of the RAM, and it also uses significantly less disk space.
With the element iterative solver, the software works entirely from the element matrices. It typically yields run-time improvements of 4x to 6x compared to the global iterative solver. These performance gains are most noticeable on models composed mostly of solid elements with millions of DOF.
There are several restrictions on the element iterative solver: for example, it does not support superelements or inertia relief, and it is supported by NX Nastran in SOL 101 analyses only.
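For completeness, the element iterative solver is selected in the Case Control section. Here is a minimal sketch; the SMETHOD command and the ITER keywords shown (ITSEPS, ITSMAX) are from memory of the NX Nastran Quick Reference Guide and should be checked against the documentation for your version:

```
$ Sketch only - verify command/keyword names in your QRG.
SOL 101
CEND
SMETHOD = ELEMENT      $ simplest form: element iterative, default settings
$ Alternatively, point SMETHOD at an ITER bulk data entry to tune
$ the convergence tolerance and iteration limit, e.g.:
$   SMETHOD = 100
$ and in the Bulk Data (keyword=value format):
$   ITER     100
$            ITSEPS=1.0E-6, ITSMAX=5000
```

In FEMAP, the equivalent choice is made in the NX Nastran analysis options ("iterative solver" / element-based), which writes these commands into the deck for you.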
Actually, I had the same problem in SOL 101. These are the solution times from the log files:
Real: 2651.917 seconds ( 0:44:11.917) - direct solver
Real: 9668.405 seconds ( 2:41:08.405) - iterative solver
The software version is NX Nastran V10.2.
All the contact regions are set to the defaults, except that initial penetration is set to zero.
Given the nature of the contact solution and the PCG solver, it is entirely possible for the element iterative solver to lead to longer solution times than the sparse solver, SMP or not. This is particularly true for problems that need a large number of contact iterations.
The culprit is the preconditioner of the PCG solver...
You are using it properly. The element iterative solver is simply not necessarily the best choice for surface-to-surface contact in SOL 101. This has nothing to do with a particular version of Nastran, or with using SMP or not; it has to do with the PCG solver itself. At each iteration, preconditioning has to happen, which takes time and may or may not negate the benefit of the PCG solver. For the exact same reason, the PCG solver might not be faster with multiple load cases...
In your case, you might want to keep using the sparse solver but look at CNTASET instead: NX Nastran is able to isolate the contact regions from the model and solve this reduced set of matrices during the contact iterations. This has led to significant performance improvements with large models that had a relatively small contact region.
BTW, what you are seeing is not at all unique to NX Nastran; it is just the nature of a PCG solver.