
LMS Test.Lab Throughput Processing Tips

by Siemens Experimenter, 09-11-2016 10:05 PM - edited 04-18-2017 10:29 PM

 


 

 

Data processing can be an arduous task, but it does not have to be! Here are some tips for navigating the processing settings and organizing results.

 

Getting Started

 

In LMS Test.Lab, any recorded time data can be analyzed using the Time Data Processing worksheet (Picture 1), which is located at the bottom of the screen.

 

Picture 1: Processing of throughput files can be done in the Time Data Processing worksheet.

 

This worksheet can be accessed via Tools -> Add-ins -> Signature Throughput Processing (Picture 2).

 

Picture 2: Turn on throughput processing by selecting “Tools -> Add-ins” in the main menu.

After pressing “OK” in the Add-in menu, the Time Data Processing worksheet should appear as a new worksheet at the bottom of the screen. Any data selected in the Time Data Selection worksheet can be processed.

 

This article covers three tips for using throughput processing:

 

  1. Measurement Mode: Stationary versus Tracked
  2. Data Naming
  3. Original Section versus Active Section

 

1. Measurement mode: Stationary versus Tracked

 

In the Time Data Processing worksheet, a major data processing choice is made by selecting either ‘Tracked’ or ‘Stationary’ measurement mode under “Change Settings” in the “Acquisition Parameters” section.

 

 

Picture 3: The Acquisition parameters menu contains Tracked versus Stationary choices for measurement mode.

 

One can select between ‘Tracked’ and ‘Stationary’ measurement mode (Picture 3).  Depending on which one is selected, the processed data results will be very different.

 

Stationary measurements

 

When ‘Stationary’ Measurement mode is selected, the processed results consist of a single averaged spectrum for each time data channel (Picture 4). 

 

Picture 4: Stationary mode results in a single averaged spectrum

When should stationary mode be used?

 

Imagine that the issue you are tasked with solving occurs only when the engine is at 2400 rpm. You can measure using the stationary method at 2400 rpm for 30 seconds. The results will show you a single spectrum revealing the highest amplitude frequencies so you can apply damping or other countermeasures to solve the issue.

 

In stationary mode, one can specify free run averaging with a specific overlap.  Different types of averaging methods (maximum, minimum, average, etc.) can also be selected.
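As an illustration of the concept (not Test.Lab's internal implementation), stationary averaging can be sketched in a few lines of NumPy: split the signal into overlapping frames, window and FFT each frame, then combine the magnitude spectra with the chosen averaging method. All function and variable names here are hypothetical.

```python
import numpy as np

def stationary_average(signal, fs, frame_len, overlap=0.5, mode="average"):
    """Average the magnitude spectra of overlapping frames (conceptual sketch)."""
    n = int(frame_len * fs)                # samples per frame
    step = max(1, int(n * (1 - overlap)))  # samples to advance per frame
    window = np.hanning(n)
    spectra = []
    for start in range(0, len(signal) - n + 1, step):
        frame = signal[start:start + n] * window
        spectra.append(np.abs(np.fft.rfft(frame)) / n)
    spectra = np.array(spectra)
    if mode == "maximum":
        return spectra.max(axis=0)
    if mode == "minimum":
        return spectra.min(axis=0)
    return spectra.mean(axis=0)            # linear average

# 30 seconds of a 100 Hz tone sampled at 1024 Hz
fs = 1024
t = np.arange(0, 30, 1 / fs)
spec = stationary_average(np.sin(2 * np.pi * 100 * t), fs, frame_len=1.0, overlap=0.5)
peak_hz = np.argmax(spec) * fs / int(1.0 * fs)  # bin spacing = fs / n
print(peak_hz)  # → 100.0
```

The single averaged curve returned here is the stationary-mode analogue of Picture 4: one spectrum per channel, regardless of how long the recording is.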

 

Tracked measurements

 

Setting ‘Tracked’ for the measurement mode results in several spectra at user defined increments. With this mode it is easy to visualize how the data changes as a function of time, rpm, or angle (Picture 5).

 

Imagine you work for an automotive manufacturer and customers are complaining about a noise they hear while driving. You are tasked with solving the issue but the only information you are given is that the noise gets worse as engine rpm increases.  You can instrument the vehicle with accelerometers, microphones, and an engine tachometer on the drive shaft. Using the tracked method, you can identify any high amplitude signals that change with engine rpm. In throughput processing, one can perform order analysis.
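Conceptually, tracked processing slices the throughput data wherever the tracking channel crosses each increment and computes one spectrum per slice. A NumPy sketch of 25 rpm tracking on a simulated run-up is shown below; this is an illustration, not Test.Lab's actual algorithm, and all names are hypothetical.

```python
import numpy as np

fs = 2048                                   # sample rate in Hz
t = np.arange(0, 60, 1 / fs)                # 60 second run-up
rpm = 1000 + (3000 - 1000) * t / t[-1]      # linear sweep, 1000 -> 3000 rpm
# Simulated 2nd engine order: instantaneous frequency = 2 * rpm / 60 Hz
order2 = np.sin(2 * np.pi * np.cumsum(2 * rpm / 60) / fs)

frame = int(0.5 * fs)                       # 0.5 second frame at each increment
targets = np.arange(1000, 3000 + 1, 25)     # tracking increment: 25 rpm
waterfall = []
for target in targets:
    i = np.searchsorted(rpm, target)        # first sample at/above the target rpm
    if i + frame <= len(order2):
        seg = order2[i:i + frame] * np.hanning(frame)
        waterfall.append(np.abs(np.fft.rfft(seg)))
waterfall = np.array(waterfall)             # rows = rpm increments, cols = frequency
print(waterfall.shape)
```

Each row of `waterfall` is one curve of the waterfall display in Picture 5: a spectrum tied to a specific rpm increment rather than to a fixed point in time.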

 

 

Picture 5: Tracked mode results in a series of spectrums which can be displayed in a waterfall graph

Tracked processing: Increment and Frame Settings

 

Often when using tracked processing, a tachometer is specified as a tracking channel (Picture 6). For example, one might process data from 1000 to 3000 rpm in increments of 25 rpm using the tachometer tracking channel.

 

Picture 6: Frame size and increment settings used in tracked throughput processing

Tracked processing results in a buildup of several frequency functions, usually calculated spectra, which are often displayed in three-dimensional plots called waterfalls (Picture 7) or colormaps. Each curve in the waterfall display is a separate processing calculation (or measurement) taken from the raw time data.  It is calculated based on user-defined acquisition parameters, namely the increment and frame size.

 

Increments are the intervals between frame calculations. These intervals are usually either time or rpm based. For example, in Picture 7 a spectrum is calculated every 25 rpm, therefore the increment is 25 rpm.

 

Picture 7: Waterfall graph showing tracked spectrums based on rpm

If tracking on time, the increment is the time interval (in seconds) between acquisitions. 

 

Picture 8 illustrates points of data acquisition in time at 1 second, 2 second, and 0.25 second increments.
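A time-based increment amounts to nothing more than a list of frame start times, one per spectrum. A trivial sketch (the helper name is hypothetical):

```python
import numpy as np

def frame_starts(duration, increment):
    """Start times (in seconds) of each processing frame for a time-based increment."""
    return np.arange(0.0, duration, increment)

print(frame_starts(10.0, 1.0))       # a spectrum at 0, 1, 2, ... 9 seconds
print(len(frame_starts(10.0, 0.25))) # → 40
```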

 

 

Picture 8: Various processing increments – Top: 1 second, Middle: 2 seconds, Bottom: 0.25 seconds

The frame size is how much data is used for a calculation at each interval.  For example, an increment of 0.5 seconds and a frame of 1 second would result in overlap processing of 50%.

 

When windows, such as Hanning windows, are used, they are applied to the frame.

 

Picture 9: Illustration of different frame sizes for data processing – Top: 1 second frame, Middle: 2 second frame, Bottom: 0.25 second frame

The relationship between the size of the increment and the size of the frame determines what data is used for processing.  Depending on the settings, it is possible that some data will not be used (i.e., data “gaps”) or that some data will be used multiple times (i.e., data “overlap”).

 

Picture 10: Increment versus Frame size illustrations

One frame of data is collected, modified with a window, and reported as a single data point. The increment is the time between these data points.

 

When the frame size and increment match, all data are processed with no overlap, meaning no missed data (Picture 11). However, if there was a window applied, an amplitude and/or energy error could occur. This is because a Hanning window zeros much of the data at the beginning and end of the frame. If a large event coincided with the beginning of a frame, the Hanning window would significantly reduce the amplitude.
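A quick NumPy check (an illustrative sketch, not a Test.Lab calculation) shows how much of a short event survives the Hanning window depending on where it falls in the frame:

```python
import numpy as np

n = 1024
w = np.hanning(n)  # near zero at both ends of the frame, near 1.0 in the middle

# The same short burst, placed at the start of the frame vs. at the center
burst = np.zeros(n); burst[:32] = 1.0                      # event at the frame edge
centered = np.zeros(n); centered[n//2 - 16:n//2 + 16] = 1.0  # event mid-frame

# Fraction of the event's energy remaining after windowing
edge_energy = np.sum((burst * w) ** 2) / np.sum(burst ** 2)
mid_energy = np.sum((centered * w) ** 2) / np.sum(centered ** 2)
print(edge_energy < 0.01, mid_energy > 0.9)  # the edge event is almost zeroed out
```

This is exactly the amplitude error described above: the same physical event can nearly vanish from the processed spectrum if it happens to line up with the start or end of a frame.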

 

 

Picture 11: 0% Overlap for processing of data when frame size and increment are equal

Using a frame size larger than the increment can help with that (Picture 12). The ‘overlap’ allows more averages within the time duration of the test and helps reduce the error associated with the window.

 

Picture 12: 10% Overlap for processing data, increment is smaller than frame size

The combination of the frame size and increment determines the overlap.  For example, a frame size of 1 second with a 0.5 second increment corresponds to a 50% overlap.
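The gap/overlap arithmetic can be captured in a small helper (a hypothetical sketch, not part of any Test.Lab API): when the increment exceeds the frame size some data falls in the gaps, and when it is smaller the frames overlap.

```python
def frame_coverage(frame_size, increment):
    """Classify the increment/frame relationship and return the gap or overlap in percent."""
    if increment > frame_size:
        # Part of each interval is never processed
        gap_pct = (increment - frame_size) / increment * 100
        return "gap", gap_pct
    if increment < frame_size:
        # Part of each frame is reused by the next frame
        overlap_pct = (frame_size - increment) / frame_size * 100
        return "overlap", overlap_pct
    return "contiguous", 0.0

print(frame_coverage(1.0, 0.5))  # → ('overlap', 50.0)
print(frame_coverage(1.0, 2.0))  # → ('gap', 50.0)
print(frame_coverage(1.0, 1.0))  # → ('contiguous', 0.0)
```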

 

2. Data Naming

 

By default the results from a Throughput Processing calculation are saved into the active section under a new run name.  The default is “Tp” which is an abbreviation of “Throughput Processing”.

When looking back at processed data, if they are all named “Tp 1”, “Tp 2”, “Tp 3”, rather than “Run 1 with High Load”, “Run 2 with Medium Load”, “Run 3 with Low Load”, it can be difficult to tell which run the data came from originally, which is useful information when analyzing and reporting.

 

Picture 13: In the center of the Time Data Processing worksheet, clicking the button labelled “…” allows different run naming options

The button next to the Name field opens the ‘Results destination options’ window (Picture 13). If the ‘Run name’ setting is changed to ‘Run Name Postscript’ the results will be saved into a new run as before, but with the new run name appended to the original run name making it easy to tell where the data came from as shown in Picture 14.

 

Picture 14: The ‘Run Name Postscript’ option contains the original run name and appends the extension.  The ‘Run name’ option does not preserve the original name.

If data was processed without the ‘Run Name Postscript’ option enabled, one can still find the original run name in the data properties.

 

Picture 15: Original run data properties

Right click on any processed data and select “Properties” to see information like:

 

  • Original project
  • Original project dir
  • Original run
  • Original section

 

3. Original Section versus Active Section

 

Take an LMS project with two sections: ‘Baseline’ and ‘Modified’.  Three data acquisition runs were performed in a baseline condition, and three runs were done in a modified condition.

 

Picture 16: LMS Test.Lab project with two sections and three runs with the same names

Both the ‘Baseline’ and ‘Modified’ sections contain runs with the same names: “Run 1”, “Run 2”, and “Run 3”, as shown in Picture 16.

 

If all six runs are processed at once, the default “Active section” setting in Throughput processing would create multiple runs of similar names in one section (Picture 17). This is confusing, because it is not clear which runs are associated with ‘Modified’ and which are associated with ‘Baseline’.

 

Picture 17: Throughput processing of all 6 runs with “Active Section” set and “Run name postscript”.  Data is mixed.

When performing throughput processing, there is an important setting that can avoid this confusing situation (Picture 18). The ‘Save Destination’ can be changed from the default ‘Active Section’ to ‘Original Section’.

 

Picture 18: The ‘Save Destination’ can be switched between ‘Active Section’ and ‘Original Section’

 

After processing the data, it is clear which data is from the ‘Baseline’ section and which data is from the ‘Modified’ section as shown in Picture 19.

 

Picture 19: With ‘Original Section’, all processed data is clearly stored with its original source

 

Note that if “Original Section” is selected, one must have the original project open.

 

Time files that are located outside the currently opened project cannot be processed with the ‘Original Section’ setting. This is because processed data can only be stored in the currently open project.

 

Conclusion

 

Hope you find these tips helpful for using Throughput Processing!

 

Questions?  Contact Us!

 

Related LMS Test.Lab Processing Tips:

LMS Test.Lab Acquisition Tips:

LMS Test.Lab Display Tips:
