NX Nastran Element Iterative Solver: Linear Contac...


03-05-2016 07:07 AM - edited 03-05-2016 07:10 AM

Hi everyone,

I have a 2-million-DOF solid TET10 model on which I'm trying to perform a linear contact analysis using NX Nastran 10.2. At this stage the model takes about 1 hour to converge using the normal direct sparse solver. I've heard of and seen presentations suggesting that requesting the element iterative solver could decrease run time 6-10x. Unfortunately, when I ran the job with the element iterative solver it took almost 5x as long as the original runs using the direct solver.

Does anyone have tips on the element iterative solver for linear contact runs? In particular, has anyone had success with certain BCPARAM entries or ISTEP bulk data entries that could decrease the convergence time without sacrificing too much accuracy?

Also, I have only one subcase, and my source and target mesh densities are exactly the same.
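For reference, here is a minimal sketch of the input-deck changes involved (entry names are from the NX Nastran Quick Reference Guide; the commented parameter line is purely illustrative, so check the exact fields against the QRG for your version):

```
$ Executive / Case Control: linear statics with the element iterative solver
SOL 101
CEND
SMETHOD = ELEMENT   $ element iterative; SMETHOD = MATRIX selects global iterative
SUBCASE 1
  SPC = 1
  LOAD = 1
  BCSET = 1         $ selects the linear contact set defined in the bulk data
BEGIN BULK
$ Contact solution parameters are tuned on a BCTPARM entry with the same ID
$ as the contact set, e.g. (illustrative only, not a recommendation):
$BCTPARM       1    MAXF      20
ENDDATA
```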

Thanks,

stressman

9 REPLIES


03-05-2016 08:23 AM

Hello!

With the current versions of **FEMAP V11.2.2** and **NX NASTRAN V10.2**, I can tell you that for a 2-million-DOF model meshed with 3-D solid TET10 elements (i.e., around 650,000 nodes), it should be impossible for the element ITERative solver to take 5x longer than the DIRECT SPARSE solver, so one reason could be that you are using an old version of FEMAP or NX Nastran.

In fact, reviewing old version release notes I can read:

- **FEMAP V10.3 & NX Nastran V8.0**: Performance improved for models with a very large number of element connection faces, and improved accuracy for contact pressure output.
- **FEMAP V11.0 & NX Nastran V8.5**: Improved memory allocation for contact and glue; helps when there is a large number of glue/contact elements.

Another reason could be that your surface-to-surface contact setup is not correct: make sure to click Defaults; your options should be similar to this picture:

Best regards,

Blas.

Blas Molero Hidalgo, Ingeniero Industrial, Director

IBERISA • 48004 BILBAO (SPAIN)

WEB: http://www.iberisa.com

Blog Femap-NX Nastran: http://iberisa.wordpress.com/



03-05-2016 09:04 AM

Hi Blas!

I'm running Femap v11.2.2 and for sure NX Nastran 10.2.

My contact properties are similar to your screenshot with the exception of the following two entries:

- Max Contact Search Distance is smaller than yours: I'm using **0.03**

- Initial Penetration is set to **Option 3: Zero/Gap Zero Penetration**

My element edge length in the contact region is **0.1**

Those are the only two differences, and I can't see how they would cause the huge disparity in run time.


03-05-2016 11:04 AM

Dear Stressman,

Sorry, I misunderstood; I thought you were using FEMAP V10.2!

Then post your model here and we will take a look at it, OK?

Best regards,

Blas.

Blas Molero Hidalgo, Ingeniero Industrial, Director

IBERISA • 48004 BILBAO (SPAIN)

WEB: http://www.iberisa.com

Blog Femap-NX Nastran: http://iberisa.wordpress.com/



03-05-2016 09:35 PM

Hi Blas!

Unfortunately I can't post the exact model, but maybe I can try to replicate it in another, simpler model. I really think the key to making the element iterative solution converge faster lies somewhere in the contact convergence parameters, but I just don't know which ones to change to get a faster run.

Have you seen an appreciable time savings using the element iterative solver for a solid model?

Thanks!


03-06-2016 05:57 AM

Dear Stressman,

In old versions of NX NASTRAN, the PCG iterative solver (element iterative) didn't use the parallel solver while the DIRECT SPARSE SOLVER did, so in those days it was an advantage to use the DIRECT SPARSE solver over the ELEMENT ITERATIVE solver. But today that is not the case at all.

The **DIRECT SPARSE SOLVER** is the most robust, accurate, and reliable option in NX NASTRAN (you get the exact answer), well suited to sparse models where accuracy is desired. The problem is the huge RAM requirement, which makes its use prohibitive in models above, say, 500,000 nodes, unless you have a lot of RAM (minimum 64 GB, and activate the ILP-64 8-bytes-per-word solver). With shells & beams the DIRECT SPARSE solver is always my preferred option.

The **ELEMENT ITERATIVE solver** (please note that with NX NASTRAN you can choose between global iterative & element iterative) performs particularly well with solid element-dominated models. It may be a faster choice if lower accuracy is acceptable. **Also, for problems involving contact and 3D solid elements, the element iterative solver is generally faster as compared to the sparse direct solver**. And the most important, it requires a fraction of the RAM memory. Additionally, the element iterative solver uses significantly less disk space.

With the **ELEMENT ITERATIVE** solver, the software works entirely from the element matrices. Use of the element iterative solver yields typical run time improvements of 4x to 6x compared to the global iterative solver. These performance gains are most noticeable with models composed of mostly solid elements with millions of DOF.

There are several restrictions for using the element iterative solver. For example, the element iterative solver does not support superelements or inertia relief, and is supported by NX Nastran in SOL 101 analyses only.

Best regards,

Blas.

Blas Molero Hidalgo, Ingeniero Industrial, Director

IBERISA • 48004 BILBAO (SPAIN)

WEB: http://www.iberisa.com

Blog Femap-NX Nastran: http://iberisa.wordpress.com/



03-06-2016 05:45 PM


04-11-2017 10:10 AM

Actually, I had the same problem in SOL 101. These are the times in the log files of the two solutions:

Real: 2651.917 seconds ( 0:44:11.917) - direct solver

Real: 9668.405 seconds ( 2:41:08.405) - iterative solver

The software version is NX Nastran V10.2.

All the contacts are set to defaults, except that initial penetration is set to zero.
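As a sanity check on those numbers (plain Python; the only assumption is that the log's `Real` values are wall-clock seconds):

```python
def hms(seconds: float) -> str:
    """Format seconds as h:mm:ss.mmm, matching the Nastran log style."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h)}:{int(m):02d}:{s:06.3f}"

direct = 2651.917     # direct sparse solver, seconds
iterative = 9668.405  # element iterative solver, seconds

print(hms(direct))     # 0:44:11.917
print(hms(iterative))  # 2:41:08.405
print(f"iterative / direct = {iterative / direct:.2f}x")  # 3.65x
```

So this case is roughly a 3.65x slowdown, in the same direction as the original poster's "almost 5x" observation.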


04-15-2017 11:10 PM

Given the nature of the contact solution and the PCG solver, it is entirely possible for the element iterative solver to lead to a longer solution time than the sparse solver, SMP or not. This is particularly true for problems that need a good number of contact iterations.

The culprit is the preconditioner of the PCG solver...
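To see why, here is a generic preconditioned conjugate gradient sketch in plain Python (the textbook algorithm, not NX Nastran's implementation): the cost of a PCG solve is roughly the iteration count times the cost of one matrix-vector product, and a weaker preconditioner inflates the iteration count. In a contact analysis the linear solve is repeated every time the contact status changes, so any per-solve penalty is multiplied by the number of contact iterations.

```python
def pcg(A, b, precond, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradient for a small dense SPD system.
    A is a list of rows; precond maps a residual r to M^-1 r.
    Returns (solution, iterations_used)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual b - A*x for x = 0
    z = precond(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for k in range(1, max_iter + 1):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:   # converged on residual norm
            return x, k
        z = precond(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x, max_iter

# A symmetric, diagonally dominant tridiagonal test matrix with widely varying
# diagonal entries -- a crude stand-in for a badly scaled stiffness matrix.
n = 50
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = float((i + 2) ** 2)
    if i + 1 < n:
        A[i][i + 1] = A[i + 1][i] = -1.0
b = [1.0] * n

jacobi = lambda r: [r[i] / A[i][i] for i in range(n)]  # diagonal (Jacobi) preconditioner
no_precond = lambda r: r[:]                            # plain CG

x_j, it_jacobi = pcg(A, b, jacobi)
x_n, it_plain = pcg(A, b, no_precond)
print(it_jacobi, it_plain)  # Jacobi needs far fewer iterations on this matrix
```

Same matrix, same answer, but the iteration count (and hence run time) depends entirely on how well the preconditioner approximates A. That is what makes PCG performance so problem-dependent compared with the direct solver.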


04-17-2017 01:43 PM - edited 04-17-2017 05:00 PM

In your case, you might want to keep using the sparse solver but look at CNTASET instead: NX Nastran can isolate the contact regions from the model and solve this reduced set of matrices during the contact iterations. This has led to significant performance improvements with large models that exhibit a relatively small contact region.

BTW, what you are seeing is not at all unique to NX Nastran; this is just the nature of a PCG solver.


© 2017 Siemens Product Lifecycle Management Software Inc