
NASTRAN Throwback Friday and Cloud-based HPC with Rescale

by Legend, 11-13-2015 05:15 PM (edited 11-13-2015 05:25 PM)

Cleaning up my archives, I stumbled upon a blast from the past: a 25-year-old picture of a NASTRAN mesh of the top-level loads FEM of the ARIANE 5 main structure.

[Image: NASTRAN mesh of the ARIANE 5 top-level loads FEM]
This naturally triggered a grey-beard ramble about the good old days of NASTRAN and HPC... punch cards, green-on-black terminals, the VMS editor...  I always chuckle when one of the younger guys in the office complains that his 2M-DOF NX NASTRAN model takes more than 10 minutes to solve on his workstation!  Patience, little grasshopper: this used to take 3 days :-)

 

Back then, this top-level FEM was considered a fairly large model at around 100,000 degrees of freedom.  To calculate the different flight loads, we would send these models to the department's CRAY, which would crunch numbers for a few hours and return a 100 KB+ .pch file for post-processing...  At the end of the month, we would get the bill for the CPU time we had used.  A nice way of corralling (and publicly shaming) the trigger-happy NASTRAN analyst.

 

Moving forward to the present day, and trying to avoid the overused Moore's Law cliché as much as possible: some methods haven't changed tremendously (NASTRAN is still NASTRAN, ...), but the size of the problems has grown consistently (and dramatically).  Our typical dynamics problem is 3-5 million DOF over a large frequency range with a lot of modes to compute; response analysis could feel, well, unresponsive :-D  100 GB+ .op2 files are now the common trend in our office.

 

Thanks to hardware and software advances, we are now simulating much larger and more complicated problems than ever.  And this trend is certainly not slowing down or reversing any time soon!  Model complexity is increasing at a faster pace than the engineering workstations and workgroup servers we have in the office: our hardware refresh cycle is several years, while our FE model complexity grows on a cycle of several months.  How do we solve this problem?  Easy: bigger computers!  At this point, to make a significant difference, "bigger" means an HPC cluster and all the baggage that comes with it: a dedicated on-premises datacenter room and dedicated IT staff...  Although the coolness factor of having our own datacenter would be great, this is not necessarily a financially viable solution for a company our size.

 

Then Rescale comes to the rescue.  Imagine being behind the wheel of an Aston Martin DB9 (or a Tesla P90D with the Ludicrous Speed upgrade, of course!), but only paying while you're driving it, and never having to worry about maintaining it or even putting gas in it (or charging it)!  That is pretty much what Rescale offers our company: the ability to use an enormous amount of computing power (and disk space!) at will, paying only for the CPU time we use!

 

Rescale's ScaleX™ platform makes the cluster feel like it is really ours, sitting in our own datacenter at the back of the office.  Submitting, monitoring and retrieving NX NASTRAN jobs on Rescale's platform is as easy as (if not easier than) on our local workgroup server, security is quasi-identical (we work under ITAR restrictions), and performance is phenomenal, thanks to NX NASTRAN's DMP scalability.  And best of all, just like in the "good old days" (actually better, as it is in real time), I know exactly what each simulation will cost!