I am not even sure if this is possible, but I figured I'd ask and see if anyone has done something like this. I am working on a project with the following design workflow requirements:
1. Pass a file name for the Part (Sheet Metal /Part) file to be opened
2. Once opened it will pass variables into the part variables table and update geometry.
3. Update the Flat pattern if required
4. Export a DXF
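The four steps above can be sketched as a pipeline with pluggable hooks. This is only a structural sketch, not the real implementation — `open_part`, `set_variables`, `update_flat_pattern`, and `export_dxf` are hypothetical placeholders for whatever Solid Edge API calls (COM automation from C++/.NET) the actual program makes.

```python
from dataclasses import dataclass
from typing import Callable, Mapping

@dataclass
class PartPipeline:
    # Hypothetical hook signatures; in production each of these would
    # wrap a Solid Edge COM call (open document, edit the variable
    # table, recompute the flat pattern, save as DXF).
    open_part: Callable[[str], object]
    set_variables: Callable[[object, Mapping[str, float]], None]
    update_flat_pattern: Callable[[object], None]
    export_dxf: Callable[[object, str], None]

    def process(self, part_path: str, variables: Mapping[str, float],
                dxf_path: str) -> None:
        # 1. Open the Sheet Metal / Part file
        doc = self.open_part(part_path)
        # 2. Push values into the part variables table (updates geometry)
        self.set_variables(doc, variables)
        # 3. Update the flat pattern if required
        self.update_flat_pattern(doc)
        # 4. Export the DXF
        self.export_dxf(doc, dxf_path)
```

Keeping the Solid Edge specifics behind hooks like this makes it easy to time each step independently and to swap in stubs for dry runs.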
All of this can be done through the API with a simple program. However, once you get into the 10,000+ files a day range, the processing time starts to stretch into hours — the 6+ hour range of continuous running on a single machine. This volume comes from across 4 plants.
The next step would be to divide the process up so that multiple CAD desktops can process the parts in parallel. The issue is that reducing the time this way requires the cost of a CAD desktop and a Solid Edge license for each node.
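A minimal sketch of that divide-and-conquer idea: a central queue of file names that N worker loops drain in parallel. Here the workers are simulated with threads; on a real deployment each worker loop would live on its own licensed CAD desktop, and `process_one` would be the real per-part automation rather than a stand-in.

```python
import queue
import threading

def run_workers(part_files, num_workers, process_one):
    """Drain a shared queue of part files across num_workers workers.

    process_one is the per-part callback (hypothetical here); each
    worker pulls the next file as soon as it finishes the last one,
    so faster nodes naturally take more of the load.
    """
    work = queue.Queue()
    for f in part_files:
        work.put(f)

    def worker():
        while True:
            try:
                f = work.get_nowait()
            except queue.Empty:
                return  # queue drained: this worker is done
            process_one(f)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

The pull-based queue matters more than the thread mechanics: it avoids pre-assigning batches to nodes, so one slow part doesn't stall a whole batch.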
It might just be me, but the licensing/hardware cost to scale this solution to 20-30,000+ files seems like a lot compared to the slice of the software's functionality actually being used. Yes, I understand that the value of the license/hardware is offset by the value being generated by the parts.
Does anyone know of another way to solve this problem?
When tackling problems like this, I always like to start by challenging assumptions. For example, 2,500 new parts coming out of a single plant in a single day sounds like a lot to me. I would question that process and validate that it is absolutely necessary to generate that many parts. Besides the challenges you've mentioned, I foresee many other challenges just managing that much data. If the process is in fact valid and there are no options for tweaking it, then you can scratch that off and move on to how to deal with it.
I'm unaware of any way to process the files as you've described other than automating Solid Edge. Since we can't change that assumption, we move on to how to speed up the process. If it were me, I would look into a custom-built CAD workstation with the fastest CPU, memory & disks you can get. I would consider overclocking the CPU and adding watercooling if necessary. For disks, I would look at the newer PCIe SSDs; they are crazy fast compared to SATA III SSDs.
The last thing I would focus on is the performance of the automation code itself. Read my How to open documents silently blog post and its comments. I would absolutely write the code in native C++ too; while it can be done in .NET, you'll be dealing with issues that don't exist in C++. I would also consider writing the automation logic in an addin, since that gives you the benefit of in-process communication vs. out-of-process communication. In that scenario, you would likely need a 2nd application to monitor Solid Edge and restart it in the event of a crash. There are a few API things you can do to increase performance as well, but you'll have to play around a bit with time trials to find the optimal settings.
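The "monitor and restart" piece can be as small as a loop that relaunches the automation host whenever it dies abnormally. A minimal sketch, assuming the automation runs as a separate process you can relaunch from a command line (the command itself is whatever your setup uses); a real watchdog for Solid Edge would also want hang detection via timeouts, not just crash detection:

```python
import subprocess

def babysit(cmd, max_restarts):
    """Run cmd; relaunch it each time it dies with a non-zero exit code.

    Returns the number of restarts performed. Exit code 0 is treated
    as "all work finished"; anything else as a crash worth retrying.
    """
    restarts = 0
    while True:
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return restarts          # clean exit: work is finished
        if restarts >= max_restarts:
            return restarts          # too many crashes: give up
        restarts += 1                # crash: relaunch the automation
```

Capping the restarts matters: a part file that reliably crashes Solid Edge would otherwise put the watchdog into an infinite relaunch loop.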
Those are my thoughts for now. I may have more as the discussion progresses.
Thank you for your response. I kind of figured it was a long shot and most likely not available, but you've got to ask or you'll never know.
Yes, it is a lot of files, unfortunately. My company has the challenge of selling a lot of different types of sheet metal products. One example is a product that can be ordered in 1/16" increments while varying from 24"W x 24"H to 48"W x 96"H and 3" to 16" in depth, containing about 5-10 unique parts. We try to reduce the unique parts using a configurator and standard parts in 1" increments. However, the number of possible applications and orderable configurations puts a heavy demand on our engineering staff. We try to avoid doing these types of standard specials, but at the end of the day the customer gets what they want.
As for equipment, we're using HP Z420s with a 3.70GHz quad core, 16GB RAM, a SATA III SSD, and a 2GB NVIDIA Quadro K2000 video card. This combination can get the cycle time down to 3-4 seconds per part. Then it's just a volume problem.
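For context, the volume math under those cycle times is easy to sanity-check (the exact per-file times are assumptions bracketing the 3-4 s figure above):

```python
def hours_needed(files_per_day, seconds_per_file, machines=1):
    """Wall-clock hours to clear the daily queue, assuming perfect
    parallelism across machines and a constant per-file cycle time."""
    return files_per_day * seconds_per_file / machines / 3600.0

# 10,000 files at 3.5 s each on one workstation:
print(round(hours_needed(10_000, 3.5), 1))              # -> 9.7
# Same load split across 4 machines (e.g. one node per plant):
print(round(hours_needed(10_000, 3.5, machines=4), 1))  # -> 2.4
```

At ~2.2 s per file the single-machine figure drops to about 6 hours, which lines up with the "6+ hour range" quoted earlier in the thread — so the only levers are per-file time and machine count.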
From my experience, Solid Edge does not seem to take advantage of the extra cores, even with multiple instances of the application running silently and each process's affinity set to its own core. This led to the conclusion that if we are to run the process in parallel, we would have to do it in a distributed, node-like system.
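For anyone wanting to repeat that affinity experiment, a small helper sketch follows. Note the platform assumption: `os.sched_setaffinity` only exists on Linux; on the Windows machines actually running Solid Edge you'd reach for psutil's `Process.cpu_affinity` or the Win32 `SetProcessAffinityMask` call instead.

```python
import os

def pin_to_core(core):
    """Pin the current process to a single CPU core.

    Returns the resulting affinity set, or None on platforms without
    os.sched_setaffinity (e.g. Windows, where
    psutil.Process().cpu_affinity([core]) does the same job).
    """
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, {core})       # 0 = this process
        return os.sched_getaffinity(0)
    return None
```

Even with each silent instance pinned like this, the observation above stands: the bottleneck is inside the single application instance, not in how the OS schedules it.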
The downfall of a system like this is that each node would have to be a standalone desktop so as not to break any licensing agreements. So for each node (HP Z420) we would have to have a Solid Edge license assigned to it. We could use floating licenses and try to leverage a license when it's not being used; it's just that the order information is processed in real time, not as an overnight batch process.
There are lots of ways to do something like this. I was just hoping that the cost to scale was not tied to the cost of a license, as they are very costly.