
Importing many large results + Deleting many useless vectors

Experimenter

I have ~85 1.2GB result files (.op2) that I need to import and do enveloping/visualization on.  

Each one has ~3k output vectors.  I have a list of 550 output vectors that I care about.  Once the results are imported and I have used the global ply feature to compile a vector of core shear stresses in a complicated laminate, the number of vectors I care about drops to about 100.  
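For anyone following along, the enveloping step itself reduces to an element-wise max (or min) across output sets once the vectors are extracted. A minimal sketch in plain Python, with a hypothetical data layout (one dict of element ID to value per output set); this is an illustration of the idea, not FEMAP's implementation:

```python
# Sketch of enveloping across output sets: for each element, keep the
# extreme value over all sets. Hypothetical layout: each output set is a
# dict mapping element ID -> stress value for one result vector.
def envelope(output_sets, keep=max):
    """Element-wise envelope (max by default) across output sets."""
    env = {}
    for vec in output_sets:
        for eid, value in vec.items():
            if eid not in env or keep(value, env[eid]) == value:
                env[eid] = value
    return env

# Example: a core shear stress vector for 3 elements across 2 load cases.
set1 = {101: 12.0, 102: -5.0, 103: 8.0}
set2 = {101: 9.0, 102: -7.0, 103: 15.0}
print(envelope([set1, set2]))        # {101: 12.0, 102: -5.0, 103: 15.0}
print(envelope([set1, set2], min))   # {101: 9.0, 102: -7.0, 103: 8.0}
```

With 85 result sets the pattern is the same; only the extraction of the per-set vectors changes.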

 

These result files are all from SOL106 runs.  

 

I have hacked together some API macros to

- import results given a list of files in excel

- rename result set titles based on what they actually contain

- scrub unwanted vectors from output sets (3k vectors --> 550 vectors)

 

This is taking a ridiculous amount of memory (easily 48 GB committed at this point) and time.

Is there a way to scrub OP2s directly?  I cannot find documentation on the file format, but I would be more than willing to write a native OP2 reader/writer that could delete unnecessary results before importing.  
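For what it's worth, classic .op2 files are built on standard FORTRAN unformatted record framing: each record is a 4-byte length marker, the payload, then the same marker repeated (endianness can vary by platform). A scrubber would walk these records, decode the table headers, and copy only the wanted tables to a new file; the table/vector decoding on top of this framing is the hard part (pyNastran is an existing open-source OP2 reader worth studying). A minimal sketch of just the framing layer, demonstrated on a tiny synthetic file rather than a real OP2:

```python
# Sketch of walking FORTRAN unformatted records: [len][payload][len].
# Little-endian ("<") is assumed here; real OP2s may differ by platform.
import os
import struct
import tempfile

def read_fortran_records(path, endian="<"):
    """Return the raw payload of every record in the file."""
    records = []
    with open(path, "rb") as f:
        while True:
            head = f.read(4)
            if len(head) < 4:        # end of file
                break
            (n,) = struct.unpack(endian + "i", head)
            payload = f.read(n)
            (tail,) = struct.unpack(endian + "i", f.read(4))
            if tail != n:
                raise ValueError("corrupt record framing")
            records.append(payload)
    return records

# Demo on a tiny synthetic file using the same framing (not a real OP2):
payloads = [b"HEADER  ", struct.pack("<3i", 1, 2, 3)]
fd, tmp = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    for p in payloads:
        marker = struct.pack("<i", len(p))
        f.write(marker + p + marker)
print(read_fortran_records(tmp) == payloads)   # True
os.remove(tmp)
```

A scrubber built on this would inspect each table's name record, skip copying records for tables you do not need, and write the survivors back out with the same framing.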

 

Is there another method to accomplish this?  Has anyone done something similar and found a better way to handle this situation?

Even attaching the result files takes hours, and I find that I cannot scrub down the results to condense the amount of data (because they are attached rather than imported, they are read-only?  Global ply also seems not to work).

 

I am running v11.1.1.  

10 REPLIES

Re: Importing many large results + Deleting many useless vectors

Genius

How are you handling the files? Have you tried opening / processing / closing each OP2 in turn, instead of leaving all processed OP2s open / attached?

Re: Importing many large results + Deleting many useless vectors

Experimenter

I'm using feFileReadNastranResults.  Currently I import all of them, then delete unnecessary vectors from all of them, then run global ply, then do another round of deleting.

 

I am working on doing this one at a time.

 

I'm still interested in being able to do this outside of FEMAP, working with the OP2 file format directly.  That way it can be parallelized.
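Once scrubbing lives outside FEMAP, the driver is a straightforward fan-out over the file list. A sketch, where `scrub_op2` is a hypothetical placeholder for the actual record-copying logic; a thread pool is used here because scrubbing is mostly disk-bound, and `ProcessPoolExecutor` would be the drop-in swap if the parsing turns out to be CPU-bound:

```python
# Sketch of running one scrub job per .op2 file, several at a time.
# scrub_op2 is a hypothetical placeholder, not a real FEMAP/Nastran API.
from concurrent.futures import ThreadPoolExecutor

def scrub_op2(path):
    # Placeholder: would parse `path`, drop unwanted result vectors,
    # and write the reduced file next to the original.
    return (path, "scrubbed")

def scrub_all(paths, workers=4):
    """Scrub every file in `paths`, `workers` files at a time."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(scrub_op2, paths))

print(scrub_all(["run_001.op2", "run_002.op2"], workers=2))
# [('run_001.op2', 'scrubbed'), ('run_002.op2', 'scrubbed')]
```

With 85 files this keeps the disks busy while each scrubbed file stays a fraction of its original 1.2 GB before FEMAP ever sees it.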

Re: Importing many large results + Deleting many useless vectors

Siemens Phenom
Have you tried "File - Attach to Results"? You can multi-select as many .op2 files as you want and then envelope on the attached results. No data is internalized, so it should be pretty quick.

Re: Importing many large results + Deleting many useless vectors

Experimenter

Unfortunately this is not pretty quick.  It takes ~1.5 minutes to attach to a single OP2.

Also it does not allow me to delete unnecessary vectors to reduce the amount of data that must be loaded into memory.

Also it does not work with the global ply functionality, presumably because the files are read-only.  

 

Re: Importing many large results + Deleting many useless vectors

Creator

Hello!

I have the same problem: a lot of data, and I need to create vectors in the material orientation, because the .op2 file does not contain vectors in the user-defined material orientation. Unlike the .op2, the .f06 has the results in the material orientation.

 

It is taking too much time to create vectors in the material orientation.

Re: Importing many large results + Deleting many useless vectors

Phenom
The method you have described in the original post might be the best way to do the job. You may want to make sure there is a File -> Rebuild step each time you delete multiple results vectors (to condense the model), then save, and repeat for the 85 op2 file sets. This may take a very good computer 1 or 2 hours (automated), which would be reasonable for 100 GB of results. You are probably doing it already, but make sure everything that matters (including the Femap scratch directory and the Windows page file) is on SSDs, preferably connected by SATA3 (good) or PCIe (best). Make absolutely certain that you have done the Read/Write test in File -> Preferences -> Database (after setting the fastest SSD location for the Femap scratch), which will set the Femap scratch/undo file operations to the fastest mode for your scratch disk.
No doubt there are ways to further optimize what you are trying to do, but to be a little blunt: if you have 100 GB of results and you need to process data across all result sets, then 48 GB of memory is no longer a ridiculous amount; in fact you are lucky it is such a small amount. At about $10 per GB, I suggest you jam as much memory as you can into your computer (128 GB would be a good start for models with 100 GB of results), and your return on investment will be large. Also, even though 1.5 minutes for a 1.2 GB result is still "mediocre to fair", it does sound constrained by the speed of your computer, the amount of memory you have, and/or other I/O limitations. Unlike the simple job that (e.g.) video renderers have, where 100 GB of data is dealt with sequentially, the process you are attempting requires access to multiple chunks of that 100 GB across a range of locations and multiple files in parallel, which is an order of magnitude more complex than rendering.
If any of your process is constrained to mechanical disk(s), no matter how good, then settle in for the long haul.

Re: Importing many large results + Deleting many useless vectors

Phenom
And finally... depending on the way you have built your API, you may want to temporarily reduce the Undo levels (File -> Preferences -> Database) to 0 or 1, to make sure no resources are wasted on saving temporary information that you do not want/need. You will probably need to restart Femap for that to take effect.
Remember to set it back up to a sensible number once all your results processing automation has been completed.

Re: Importing many large results + Deleting many useless vectors

Phenom

In the very near future I think the answer will be the HDF format. The op2 is a hierarchical binary format that 'lacks' standardization; the hdf format is a widely used hierarchical binary format readable by most modern codes. MSC Nastran has already switched to it in its latest release, and I hope Siemens will do the same soon.

 

So regarding your comment "Is there a way to scrub OP2s directly?  I cannot find documentation on the file format but would be more than willing to write a native OP2 read/writer that could delete unnecessary results before importing.":

soon enough, yes: the op2 will be easily attackable directly (or rather the hdf will be), and no, you won't even have to write the parser yourself; parsers already exist!

 

I don't know whether the developers will enable FEMAP to read hdf files; if not, there will always be FEMAP csv files for re-importing modified output.

 

AP

Re: Importing many large results + Deleting many useless vectors

Experimenter
I'm looking to do something similar. Did you ever figure this out? I am also interested in an API to load in a bunch of .op2s, if you already have that.