Calibration uses a known signal to calculate the sensitivity of your transducer. For example, with accelerometer calibration we use a known signal, such as a calibrator that outputs 1 g RMS at 79.6 Hz or 159.2 Hz. We then measure the voltage the accelerometer produces when it is placed on that calibrator. From that we can calibrate the accelerometer and calculate the sensitivity in V/g or mV/g.
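Just to make the arithmetic concrete, here is a minimal Python sketch of that calculation (this is not part of Test.Lab; the 0.1 V reading is purely an illustrative value):

```python
def accel_sensitivity_mv_per_g(measured_v_rms: float, reference_g_rms: float = 1.0) -> float:
    """Sensitivity in mV/g: measured RMS voltage divided by the known RMS acceleration."""
    return measured_v_rms / reference_g_rms * 1000.0

# Example: the channel reads 0.1 V RMS on a 1 g RMS, 159.2 Hz calibrator
print(accel_sensitivity_mv_per_g(0.1))  # -> 100.0 mV/g
```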
Microphone calibration does the same thing via a calibrator that outputs a known signal, such as 94 or 114 dB (re 20 µPa, i.e. roughly 1 or 10 Pa RMS) at 250 Hz or 1000 Hz, to calculate the sensitivity in mV/Pa or V/Pa.
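The same idea in a short sketch, including the dB-to-pascal conversion (94 dB re 20 µPa works out to about 1 Pa RMS); again, the measured voltage is just an example value, not something from Test.Lab:

```python
def spl_db_to_pa(spl_db: float, p_ref: float = 20e-6) -> float:
    """Convert a sound pressure level in dB (re 20 µPa) to RMS pressure in Pa."""
    return p_ref * 10.0 ** (spl_db / 20.0)

def mic_sensitivity_mv_per_pa(measured_v_rms: float, calibrator_spl_db: float) -> float:
    """Sensitivity in mV/Pa from the RMS voltage seen on a known calibrator tone."""
    return measured_v_rms / spl_db_to_pa(calibrator_spl_db) * 1000.0

# Example: 50 mV RMS measured on a 94 dB (~1 Pa RMS) calibrator at 1000 Hz
print(round(mic_sensitivity_mv_per_pa(0.050, 94.0), 1))  # -> 49.9 mV/Pa
```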
If you log into the Global Technical Assistance Center (GTAC) with your WebKey and password, we have several tutorials at https://support.industrysoftware.automation.siemens.com/docs/lms/test.lab/. The Signature_Testing tutorial covers this. Also, in the software, clicking the ? in the top right corner of the Calibration worksheet takes you to the Help file for that specific worksheet.
The steps are described in my previous response. Here they are in more detail: go to the Calibration worksheet, select the Unit as Pressure, select your microphone channel, put the microphone on a microphone calibrator (pistonphone), supply a known pressure signal (usually 94, 104, or 114 dB) at a known frequency (usually 250 or 1000 Hz), enter that information into the software, and then press Check, then Start. If that sine wave frequency is found on the selected channel, it will calibrate, and you can then Accept the New Sensitivity.
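If it helps to see the Check/Start logic spelled out, here is a rough Python sketch of what such a check could look like: find the dominant tone on the channel, confirm it matches the expected calibration frequency, then compute the sensitivity from the channel's RMS voltage. This is only my assumption about the general approach, not Test.Lab's actual implementation, and all values are illustrative:

```python
import numpy as np

def check_and_calibrate(signal, fs, expected_hz, calibrator_spl_db,
                        tol_hz=5.0, p_ref=20e-6):
    """Verify the calibration tone is present on the channel, then return
    a new sensitivity in mV/Pa (illustrative sketch only)."""
    # Locate the dominant spectral peak (skip the DC bin)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    peak_hz = freqs[1:][np.argmax(spectrum[1:])]
    if abs(peak_hz - expected_hz) > tol_hz:
        raise ValueError(f"Expected {expected_hz} Hz tone, found {peak_hz:.1f} Hz")
    # RMS voltage of the channel against the known RMS pressure of the tone
    v_rms = np.sqrt(np.mean(signal ** 2))
    p_rms = p_ref * 10.0 ** (calibrator_spl_db / 20.0)
    return v_rms / p_rms * 1000.0  # mV/Pa

# Example: one second of a synthetic 1000 Hz tone, 50 mV amplitude, at 51.2 kHz
fs, f0 = 51200, 1000.0
t = np.arange(fs) / fs
v = 0.050 * np.sin(2 * np.pi * f0 * t)
print(round(check_and_calibrate(v, fs, f0, 94.0), 1))  # -> 35.3 mV/Pa
```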
The process is the same in all the acquisition workbooks that support calibration. If you need further assistance, please reach out to your local customer services team.