Siemens PLM Community > Simcenter > 3D Simulation - Femap Forum

Nonlinear matrix decomp. slow and fragmented


Solved!
Go to solution


09-14-2016 12:09 PM - edited 09-14-2016 12:10 PM

I'm seeing some strange behavior on a model solve. I think it should be solving faster.

I'm using 6 CPUs (probably relevant). The CPU history is shown below, as is the reduction history. I've marked the points on the reduction history where the solution 'hangs'. I have 20 GB of free disk space and 50 GB of free RAM.

It's really not a big model, only 73k nodes, but it does have block connectivity between the parts with CGAP elements, plus two very large RBE3 elements.


9 REPLIES

Solution (accepted by topic author Kava, 09-15-2016 11:58 AM)


09-14-2016 04:30 PM


09-14-2016 05:27 PM - edited 09-15-2016 03:54 AM

The same model runs cleanly in linear static analysis though. Not exactly the same model, actually, but with the same RBE3 clusters. I'll try some other approach tomorrow.

*Update: It runs the same way in linear analysis, so matrix density due to the RBE3s is almost certainly the problem. I'm still surprised that the solver stops running in parallel, but that could be because there is so much coupling that there is no inherent parallelism left when the reduction moves across those RBE3s.


09-14-2016 08:54 PM


09-15-2016 03:55 AM


09-15-2016 09:27 AM

Have you checked the f04 file to ensure you have allocated enough RAM?

Also, remember that RBE elements are linear only, and that gap elements assume no relative motion in the shear direction; both of these can lead to issues if you are considering large displacements.

Another thing that can hurt nonlinear performance: if your model is mostly solid elements, did you constrain the rotational DOFs 4, 5, and 6? Be aware that AUTOSPC is off for nonlinear. Did you check the grid point singularity table of the linear run to see what was being constrained there?

Regards,

Joe
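As a hedged illustration of the point about rotational DOFs (this entry is not from the thread; the set ID 100 and the grid range are made up for the example), a bulk data entry constraining rotations on solid-element grids could look like:

```
$ Hypothetical example: constrain rotational DOFs 4,5,6 on
$ solid-element grids 1 through 73000, since AUTOSPC is off
$ in nonlinear (SOL 106) runs.
SPC1    100     456     1       THRU    73000
```

The case control section would then select it with SPC = 100.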


09-15-2016 11:41 AM


09-15-2016 01:15 PM

RBE2 and RBE3 elements are small-displacement only by default. The equations are written based on the undeformed geometry and are not updated as the model deforms.

The exception would be if you specified "thermal expansion rigids" to trigger Femap to write the case control command RIGID=LAGRAN. This enables thermal expansion, large displacement, and differential stiffness.

Glue connections are not a problem in nonlinear.

Also, just to reinforce: the f04 file contains the information required to understand any performance issue. You could post it if you would like opinions on possible performance improvements.

Joe


09-18-2016 09:12 PM

I found this link useful...

http://mae.uta.edu/~lawrence/me5310/course_materia

Big RBEs add both density and bandwidth, but whether the bandwidth, rather than the density, contributes any further penalty to the solve time I am not sure. Note that sparse solvers internally renumber to minimize matrix bandwidth.

One other thing to keep in mind: if there is any moderate rotation in your RBEs in SOL 106, the analysis will bisect to the smallest allowable load step every time (noting again that RBEs are small-displacement elements). If you want to avoid excessive bisection, you can replace the RBEs with beams of appropriate stiffness: quite stiff for an RBE2, quite flexible for an RBE3 (but don't make them either too stiff or too flexible). Custom Tools | Element Update | Convert RBE to active Beam can be used to replace a big RBE with multiple beams. That's if you can't otherwise replace your current RBEs simply by distributing the load across multiple nodes.
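The density point is easy to see outside Nastran. The sketch below (Python with SciPy, an illustration only, not Femap/Nastran code) builds a narrow-banded stiffness-like matrix, then couples one DOF to all of the others, roughly what a very large RBE3 does, and compares factor fill-in when no fill-reducing reordering is applied:

```python
# Illustrative sketch only (SciPy, not Nastran): one dense row/column,
# analogous to a large RBE3 tying many DOFs to a single node, inflates
# factorization fill-in when no fill-reducing reordering is used.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 400
# Tridiagonal, diagonally dominant matrix: a 1-D chain of elements.
A = sp.diags([-np.ones(n - 1), 4.0 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1], format="lil")

# Couple DOF 0 to every other DOF, like a big RBE3 cluster.
B = A.copy()
B[0, :] = 0.1
B[:, 0] = 0.1
B[0, 0] = 4.0 + 0.1 * n  # keep the matrix diagonally dominant

nnz = {}
for name, M in [("banded", A), ("banded + dense row", B)]:
    lu = splu(M.tocsc(), permc_spec="NATURAL")  # no fill-reducing reordering
    nnz[name] = lu.L.nnz + lu.U.nnz
    print(name, "factor nonzeros:", nnz[name])
```

With the dense row eliminated first, fill propagates through essentially the whole matrix, so the factors grow from O(n) to roughly O(n^2) nonzeros; that is the kind of density penalty a large RBE3 can introduce.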


09-20-2016 08:13 AM

EndZ wrote:

Note that sparse solvers do internally renumber to minimize matrix bandwidth.

Not really. The one reordering I know of for sparse solvers is AMD (approximate minimum degree), which doesn't minimize bandwidth at all; usually quite the opposite. What it does minimize is the number of reduction operations and the storage required for the factors.

https://en.wikipedia.org/wiki/Minimum_degree_algor
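The distinction shows up clearly in a toy example. The sketch below (Python with SciPy, an illustration only; SuperLU's COLAMD is used as a stand-in for the minimum-degree family, since plain AMD isn't exposed by SciPy) factors the same dense-row matrix with and without a fill-reducing ordering:

```python
# Illustrative sketch (SciPy/SuperLU, not Nastran): a fill-reducing
# ordering from the minimum-degree family (COLAMD here) doesn't shrink
# bandwidth, but it drastically cuts factor storage and operation count.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 300
# Tridiagonal chain with DOF 0 coupled to everything (a dense row/column).
A = sp.diags([-np.ones(n - 1), 4.0 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1], format="lil")
A[0, :] = 0.1
A[:, 0] = 0.1
A[0, 0] = 4.0 + 0.1 * n  # keep it diagonally dominant

nnz = {}
for spec in ("NATURAL", "COLAMD"):
    lu = splu(A.tocsc(), permc_spec=spec)
    nnz[spec] = lu.L.nnz + lu.U.nnz
    print(spec, "factor nonzeros:", nnz[spec])
```

The fill-reducing ordering pushes the dense column toward the end of the elimination, so the factors stay nearly as sparse as the original matrix, even though the permuted matrix's bandwidth is as large as ever.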


© 2017 Siemens Product Lifecycle Management Software Inc