
Issue with System ID not populating Test Predictions #7

Open
ArakisIII opened this issue Aug 23, 2024 · 3 comments
Labels
question Further information is requested

@ArakisIII

More of a question than a bug, since this may just be a problem with how I set up my hardware. I'm hoping you can provide some guidance on why I may be encountering this issue in the Random Vibration environment. I have a single-axis voice coil shaker powered by two audio amplifiers, an NI USB-6002 DAQ, and an analog accelerometer set up as a differential input. The accelerometer is the first entry in the Channel table; the analog output to the audio amplifiers is teed off to an analog input, which is the second entry. I believe these are all set up relatively correctly, as I am able to get pretty far in the System ID steps: Rattlesnake is controlling the output to the vibe table and reading in and displaying the response.

I have loaded a modified (reduced Grms) specification based on the example you created in another issue here, along with pseudoinverse control, and then I perform the System ID. I see the transfer function and phase plots start to populate and take shape. Then, when the System ID stops after 20 frames, I switch to the Test Prediction tab. Here I see the Output RMS anywhere from 2 to 20+ depending on my Vrms output during the test prediction step. The Response Error is always 0 dB, with the "Real Spec" shown but nothing else. I can switch to the Run Test tab and start a test, but this results in the output spiking and the DAQ control loop erroring, stating that the output is too high.

I also tried the pre-compiled 2.0 version, which lets me get to the Test Prediction step, but upon completion of all the steps, it gives me the error shown below:
Screenshot 2024-08-22 174348

Any help or tips on how to debug further would be greatly appreciated. Thanks for the awesome work!
Screenshot 2024-08-15 182339
Screenshot 2024-08-22 174554

Rattlesnake_SignalGenTableList_5.xlsx

@ArakisIII ArakisIII added the question Further information is requested label Aug 23, 2024
@dprohe
Collaborator

dprohe commented Aug 24, 2024

First, the reason the 2.0 version isn't working is that it looks like you have it pointing at the control law file from the 3.0 version. The 2.0 control laws used fewer arguments than the 3.0 control laws (the 3.0 control laws receive more arguments to allow you to make control decisions using some computed data quality metrics).

As far as your question regarding your initial setup with the 3.0 version goes, it sounds to me like the software is mostly working correctly. With a 1x1 control problem, you should get a prediction of 0 dB error, because at each frequency line the controller thinks it can just turn the volume up or down until it matches. Basically, you can invert a 1x1 matrix to exactly match any desired response. The response error plot should appear to be only one line, because the prediction line should sit identically on top of the specification line due to the zero dB error. Because you only have a 1x1 CPSD matrix, and the diagonals of CPSD matrices are real, there should be no imaginary part. Only the off-diagonal terms of the CPSD matrix have imaginary parts, and a 1x1 CPSD matrix has no off-diagonal terms.

It seems like the real error is that the output the controller wants is too large for your data acquisition system to produce. You put a limit of +/- 4 V in your channel table; however, even with a 2 V RMS output signal, we would expect peaks of something like 6-8 V. You might try increasing the gains of your amplifiers so that a smaller voltage output from the data acquisition system results in a larger response. The controller would then need less output voltage to achieve a similar response.

I do have some questions about how your channel table is set up, which may be contributing to your issue of requiring too much voltage. It looks like your accelerometer is not set up as an acceleration channel, but rather as a voltage channel with a sensitivity of 80 mV/V. I've seen this done when using an external signal conditioner; however, you also have ICP turned on for that channel, so I'm not sure exactly what your setup is. Regardless, when you specify a channel in the NI-DAQmx libraries with a voltage type, the sensitivity and current excitation source are ignored; it just measures the raw voltage. So what's happening is that the controller software thinks the sensitivity of that channel is 80 mV/V, but the data acquisition system never applies that sensitivity, so you are getting out the raw voltage as if it were 1000 mV/V. As you can imagine, this adds a 12.5x scale factor that is not accounted for, and may be the reason you are getting a larger voltage than you expect. The other potential issue is that you have the specification loaded from that other issue, which was defined in G^2/Hz. The specification doesn't have units associated with it; it assumes it's in the same units as your channel table. So you're actually specifying your test in V^2/Hz, and the scaling between Gs and Vs for your accelerometer is perhaps adding another unknown scale factor between what you wanted your test to control to and what Rattlesnake is actually trying to control to.
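To make that scale factor concrete, here is the arithmetic as a quick sketch (the 80 mV/V and 1000 mV/V values are the ones from the channel table as described above):

```python
# The controller divides raw volts by the sensitivity entered in the
# channel table, but a voltage-type NI-DAQmx channel returns raw volts,
# i.e. it behaves as if the sensitivity were 1000 mV/V.
entered_sensitivity_mV_per_V = 80.0    # value in the channel table
applied_sensitivity_mV_per_V = 1000.0  # what the DAQ actually uses
scale_error = applied_sensitivity_mV_per_V / entered_sensitivity_mV_per_V
print(scale_error)  # 12.5
```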

As far as the variability in the predicted output from 2 to 20 V, I usually only see that when the system identification signal is down in the noise floor of the test. The transfer functions are then noisy and can vary run to run. If the response is large enough and the transfer functions are clean, barring any huge nonlinearities in the system, you should see fairly consistent transfer functions being developed, which means that if your specification is not varying, your output voltage should not be varying either.

My suggestion would be to modify your channel table to make your accelerometer channel have the Acceleration type, and enter the unit as Gs and type in the sensitivity in mV/G. Then your specification, which is defined in Gs, will be consistent with your channel table. Alternatively, if you must keep it as a voltage channel for whatever reason, put the sensitivity of the channel to 1000 mV/V, which is what the data acquisition system will actually use. Any other value will result in a disconnect between what the controller thinks is happening and what the data acquisition system is actually doing.

@ArakisIII
Author

First off, thank you for such a detailed response! This is giving me a lot more understanding of how Rattlesnake is working under the hood. I think I am a lot closer to understanding the issue with my system.

Regarding the acceleration vs. voltage channel type: I have it set to voltage because, for some reason, Rattlesnake throws an error when it is set to acceleration. Most likely this is due to limitations of the USB DAQ I am using, since even in NI DAQExpress channels cannot be set up as accelerometers. Despite searching around, I could not figure out why; this capability may be reserved for NI hardware with IEPE/ICP/CCLD support. I am using a simple MEMS-type DC accelerometer set up in differential mode. I understand now that the excitation should not be used for this type.

Per your suggestions I have opened up the range on the accel input and response channels to [-10, +10] V. I have also set the sensitivity to 1000 mV/V and removed the Excitation Source/Current columns. Loading the specification from before and running the System ID steps, I am still seeing similar results. As part of previous debugging I did play around with the gain on the amplifiers, but as you suggested I settled on the highest gain setting whenever doing the System ID steps, so unless my amplifiers have some undiagnosed issue, they should be outputting their max. I ran the System ID steps multiple times in a row without changing anything and again get varying results:
0.0100 Vrms (Pseudorandom) [12.167, 29.401, 23.500, 56.025]
0.100 Vrms (Pseudorandom) [15.440, 17.204, 31.405, 8.581]
0.100 Vrms (Random) [8.451, 18.378, 9.716, 8.894, 54.691]

The above results lead me to assume I am operating at the noise floor, as you mentioned. My next steps will be to verify my response channel is correctly teed off from the DAQ's analog output and properly wired to the DAQ's analog input. I've already used the Modal environment in 3.0 to generate a sine wave, and Rattlesnake reads the response as expected, at least visually confirming that the peak-to-peak voltage is correct. But perhaps even at 0.1 Vrms during the SysID steps there is too much noise.

Also, to clarify your response regarding the CPSD error being zero: would the "Imag" prediction component of the output plot on the left of the Sys ID tab also be zero? Forgive my limited understanding here; it's been a number of years since I've dealt with controls and imaginary numbers.

Thank you again for your help!

Screenshot 2024-08-26 162753
Rattlesnake_SignalGenTableList_6.xlsx

@dprohe
Collaborator

dprohe commented Oct 1, 2024

I've never used DAQExpress hardware or your specific accelerometer, so I don't know what the noise floor looks like for that hardware combination. However, from my previous experience, something like 0.1 V should be high enough to be out of the noise floor (for a 24-bit data acquisition system with a 10 V range, you should have resolution down to the microvolts), so I am surprised at the variation in output prediction you are seeing. I'd specifically look at the "Levels" and "Coherence" plots on the System Identification tab to judge whether you're in the noise floor (they are hidden by default; you can see them by checking the "Coherence and Conditioning" and "Levels" checkboxes in the lower right corner of the System Identification tab). Just looking at the transfer functions in the previous figure you sent, it does look pretty noisy between 600 and 900 Hz, so that could be the reason.

For a bit of background on CPSDs and complex numbers in general (hopefully this is useful; an issue report probably isn't the best medium for presenting it):

Power Spectral Density (PSD) functions represent the "power" at each frequency line in a given signal. As a simplified mental model of the PSD, you can kind of think of it as a product of two FFTs (with one being a complex conjugate) averaged over a number of realizations (note this is not completely correct; there's a bit more going on in terms of scaling, windowing, etc.; see, for example, the cpsd function in SDynPy for an actual implementation):

$$G_{XX} = \frac{1}{n}\sum_n{X_n X_n^*}$$

where $X_n$ is the spectrum (FFT) of a time signal for average $n$. This equation is performed frequency-line-by-frequency-line, so let's just consider one frequency line at the moment, recognizing that we would actually do this computation at all frequency lines. We might call this the auto-power spectral density (APSD) of $X$, because it computes the PSD of $X$ with itself.
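As an illustrative sketch of that simplified mental model (synthetic data, no windowing or scaling; this is not Rattlesnake's or SDynPy's actual implementation), the APSD of a noisy sine can be estimated like this:

```python
import numpy as np

# Average X_n * conj(X_n) over n measurement frames, frequency line by
# frequency line. Real implementations add windowing and scaling.
rng = np.random.default_rng(0)
fs, frame_len, n_frames = 1024, 256, 50
t = np.arange(frame_len) / fs
frames = np.array([np.sin(2 * np.pi * 100.0 * t)
                   + 0.1 * rng.standard_normal(frame_len)
                   for _ in range(n_frames)])

X = np.fft.rfft(frames, axis=-1)        # spectrum of each frame
G_xx = np.mean(X * np.conj(X), axis=0)  # APSD estimate per frequency line

freqs = np.fft.rfftfreq(frame_len, 1 / fs)
print(freqs[np.argmax(np.abs(G_xx))])   # 100.0: the sine shows up as the peak
print(np.max(np.abs(G_xx.imag)))        # essentially zero: APSDs are real
```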

On the other hand, we might compute the PSD between two different signals that we are measuring simultaneously.

$$G_{XY} = \frac{1}{n}\sum_n{X_n Y_n^*}$$

Here $X_n$ is the spectrum (FFT) of a time signal for average $n$, and $Y_n$ is the spectrum of a different signal for average $n$. We might call this the cross-power spectral density (CPSD) of $X$ and $Y$, because it computes the PSD between $X$ and $Y$.

When we talk about a "CPSD matrix", that is just us putting together a series of PSDs between a series of signals into a matrix form. If we have 3 signals, $X$, $Y$, and $Z$, that might look like:

$$G = \begin{bmatrix} G_{XX} & G_{XY} & G_{XZ} \\ G_{YX} & G_{YY} & G_{YZ} \\ G_{ZX} & G_{ZY} & G_{ZZ} \end{bmatrix}$$

You can see the first index varies $X$, $Y$, $Z$ with the rows of the matrix, and the second index varies $X$, $Y$, $Z$ with the columns of the matrix. Note that the diagonal of the matrix then ends up being the APSDs, where the PSDs are computed from a signal with itself, and the off-diagonals end up being the CPSDs, where the PSDs are computed between two different signals. It pays to remember that while we present this as a 2D matrix, it exists at each frequency line of your FFT, so it often gets stored in Matlab or Python as a 3D array of row x column x frequency line, or frequency line x row x column. This is why, if you read the Rattlesnake user's manual, the specification for the random environment is passed as a 3D array, even if you only have one signal to control to.
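A sketch of that 3D storage layout (frequency line x row x column) using synthetic signals and plain NumPy, rather than anything from Rattlesnake itself:

```python
import numpy as np

# Build a CPSD matrix for 3 signals as a 3D array indexed
# (frequency line, row, column).
rng = np.random.default_rng(1)
n_frames, n_signals, frame_len = 50, 3, 64

signals = rng.standard_normal((n_frames, n_signals, frame_len))
S = np.fft.rfft(signals, axis=-1)  # shape (frame, signal, frequency line)

# G[f, i, j] = (1/n) * sum over frames of S[n, i, f] * conj(S[n, j, f])
G = np.einsum('nif,njf->fij', S, np.conj(S)) / n_frames

print(G.shape)  # (33, 3, 3): one 3x3 matrix per frequency line
# Diagonal entries (APSDs) are real; off-diagonals are generally complex.
diag = np.einsum('fii->fi', G)
print(np.max(np.abs(diag.imag)))  # essentially zero
```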

Ok, so what does this mean? Recall that the spectrum or FFT of a signal is a complex number. It has a magnitude and a phase associated with it, or a real and an imaginary part, depending on how you would like to look at it. We like to consider the magnitude and phase aspects of the FFT because they map very nicely onto what an FFT represents: decomposing an arbitrary signal into a number of sine waves, each of which has a magnitude and a phase. If you don't remember the conversions, I'll put them here. Mathematically, if you have a complex number (here we use $j$ for the imaginary unit $\sqrt{-1}$)

$$z = a + bj$$

the magnitude and phase will be

$$\left|z\right| = \sqrt{a^2 + b^2}$$

$$\angle{z} = \arctan(b/a)$$

We can remind ourselves just a bit about what complex mathematics means geometrically to develop more intuition about what's going on. When we multiply two complex numbers together, we end up multiplying the magnitudes of the numbers and adding the phases of the numbers. When we take a complex conjugate, we flip the sign on the imaginary part, or make it negative, which effectively also flips the sign on the phase. So if we put those two operations together, when we multiply a complex number by the complex conjugate of another complex number, we get the magnitude of the first number times the magnitude of the second number, and we get the phase of the first signal minus the phase of the second signal. This is kind of cool: we can compute the phase difference between two signals by multiplying two signals' FFTs together, with the reference signal being made complex conjugate.

So for the APSD calculation, where we compute the product of the FFT with its own complex conjugate, what we are really doing is squaring the magnitude of the signal, because the phase cancels out (you take the phase of the first signal and subtract the phase of the second signal, and because the second signal is the first signal, you end up with zero phase). If a complex number has zero phase, its imaginary part is also zero, because the tangent of the phase is the imaginary part divided by the real part.
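These two facts (conjugate multiplication subtracts phases, and a signal times its own conjugate has zero phase) are easy to verify numerically; here is a tiny sketch with arbitrary example values:

```python
import cmath

z = cmath.rect(2.0, 0.9)  # magnitude 2, phase 0.9 rad
w = cmath.rect(3.0, 0.4)  # magnitude 3, phase 0.4 rad

p = z * w.conjugate()
print(abs(p))          # ~6.0: magnitudes multiply (2 * 3)
print(cmath.phase(p))  # ~0.5: phases subtract (0.9 - 0.4)

auto = z * z.conjugate()  # APSD-style product of a signal with itself
print(auto.imag)          # zero (to rounding): the phase cancels out
```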

Let's return to our matrix form and see what that means. We noted that the diagonals are the APSDs, where we have the PSD of a signal computed with itself. From the last paragraphs, we know that this should have zero phase, and therefore zero imaginary part. Only the off-diagonal CPSD terms should have imaginary parts associated with them. If you consider your test with only one excitation signal and one response signal, your CPSD matrix will look like

$$G = \begin{bmatrix} G_{XX} \end{bmatrix}$$

for both the output (left plot on the test predictions tab) and the response (right plot on the test predictions tab).

In other words, there is only one signal, so the CPSD matrix is 1x1 at each frequency line, so there is only one term, and it is on the diagonal, so it should have no imaginary part. This should hold for both the real measurement as well as the predictions. If you were to ever find that there is a non-zero imaginary part on the diagonal of a CPSD matrix, it means that you've computed the CPSD matrix incorrectly, so it wouldn't be your test that's wrong, it would be your math that is incorrect. The only time you should see the imaginary prediction and imaginary specification lines pop up on the test prediction tab is when you have different channels selected as row and column in the channel selectors below the plots.

The last thing I'll do here is try to give a bit more intuition regarding what the off-diagonal terms actually mean. The diagonals are pretty straightforward. It's the average squared-magnitude of each signal in your test. But the off-diagonals are kind of abstract. Not only are we computing the product of two magnitudes, but there is some phase component between them. Then there is also the averaging across multiple measurement frames. Let's think through what this means.

First, let's consider two signals that are correlated. This means there is some relationship between the two signals; said another way, the phase difference between the two signals is always the same value. If you aren't seeing that immediately, consider a thought experiment: say I knock on a door with my hand. Some time later the door vibrates at a given location due to that knock. The time delay between the knock and the vibration depends on the geometry of the door and the wave speed, which depends on the material properties of the door, so it doesn't change with each knock. Therefore, we'd say that the response is correlated to the excitation; one depends on the other with a constant relationship. If I were to compute the FFTs of my knocking force and the response vibration, you'd see a constant phase difference between corresponding frequency components with each knock. When we compute the PSD between these signals, the magnitude will be the average product of the magnitudes of the knock-force and response FFTs. The phase difference, because it is always constant, doesn't affect the average. If this is still kind of abstract, reduce back to the real-number case. For real numbers, the phase can either be 0 (i.e., a positive real number) or 180 degrees (i.e., a negative real number), and if the phase is always the same (say positive in this case), then you are basically just averaging a bunch of positive numbers. Therefore, for two signals that are completely correlated, we expect the off-diagonal component of the CPSD matrix to equal the square root of the product of the corresponding diagonals (since the diagonals are already magnitude squared).

$$G_{XY} = \sqrt{ G_{XX} } \sqrt{G_{YY} }$$

Now let's consider a case where two signals are completely uncorrelated. For the thought experiment, this might be a case where I'm knocking on one door, but the measurement is occurring on a different door that someone else is knocking on, and there is no relationship between my knocking and the other person's knocking. In this case, there is no correlation between the force from my knock and the response on the other door, as it is responding to a completely different excitation signal. When computing the PSDs, each average will again be a product of the magnitudes of the two signals, but because there is no correlation, the phase between the two signals will be random. Again considering just real numbers, it would be like some of the values randomly having a positive sign and some randomly having a negative sign. So what ends up happening when we average together a bunch of measurement frames is that the products cancel out: the "positive" phase values cancel the "negative" phase values, and since there is no correlation, there will generally be an equal number of positive and negative values, so the average will tend to go to zero.

$$G_{XY} = 0$$

Finally, there is the middle case where the two signals are partially correlated. This could be a case where I am knocking on a door, but someone else is also knocking on the same door. The response on the door will then be partially correlated to the force from my knocking: the responses due to the other person's knocks will not be correlated to my knocks, but the responses due to my knocks will. In this case, the phases will have some element of randomness to them, but will not be completely random. This can again be thought of in terms of real numbers, where there will be some positive and some negative values; however, there will generally be either more positive or more negative values due to the partial correlation, so the average will never go completely to zero. However, some cancelling will occur, so the magnitude will never be greater than in the fully correlated case.

$$0 < \left|G_{XY}\right| < \sqrt{ G_{XX} } \sqrt{G_{YY} }$$

Therefore, in general, the magnitude of an off-diagonal CPSD term will always be less than or equal to the square root of the product of the corresponding APSD terms, and greater than or equal to zero. Its phase will be related to the preferred phase difference if there is correlation between the two signals.
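The averaging behavior in the correlated and uncorrelated cases can be checked with a quick Monte-Carlo sketch at a single frequency line (synthetic phases, arbitrary magnitudes of 2 and 3, so $\sqrt{G_{XX}}\sqrt{G_{YY}} = 6$):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000                 # number of averaged "measurement frames"
mag_x, mag_y = 2.0, 3.0

phase_x = rng.uniform(0.0, 2.0 * np.pi, n)
X = mag_x * np.exp(1j * phase_x)

# Fully correlated: constant phase offset between X and Y on every frame.
Y_corr = mag_y * np.exp(1j * (phase_x - 0.7))
G_corr = np.mean(X * np.conj(Y_corr))

# Fully uncorrelated: Y's phase is independent and random on every frame.
Y_unc = mag_y * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))
G_unc = np.mean(X * np.conj(Y_unc))

print(abs(G_corr))  # ~6.0 = sqrt(G_XX) * sqrt(G_YY): no cancellation
print(abs(G_unc))   # near zero: random phases average out
```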

One way CPSDs are often represented is in terms of magnitude, phase, and coherence. The magnitude and phase are identical to what we've been discussing, and the coherence is basically a measure of how correlated two signals are, computed as the ratio of the squared magnitude of the CPSD to the product of the corresponding APSDs.

$$C_{XY} = \frac{\left|G_{XY}\right|^2}{G_{XX} G_{YY}}$$

From the previous equation, it should be obvious that a coherence of 1 means that the signals $X$ and $Y$ are perfectly correlated, because in that case, the numerator will equal the denominator, and a coherence of 0 means that you have the perfectly uncorrelated case where the numerator is zero. Coherence values between 0 and 1 denote how correlated two signals are.
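A sketch of the coherence computation on synthetic data, where y is x plus independent noise scaled by 0.5, so the true coherence at every frequency line should be about $1/(1 + 0.5^2) = 0.8$:

```python
import numpy as np

rng = np.random.default_rng(3)
n_frames, frame_len = 200, 128

x = rng.standard_normal((n_frames, frame_len))
y = x + 0.5 * rng.standard_normal((n_frames, frame_len))  # partially correlated

X = np.fft.rfft(x, axis=-1)
Y = np.fft.rfft(y, axis=-1)

# Averaged APSDs and CPSD, frequency line by frequency line.
G_xx = np.mean(X * np.conj(X), axis=0).real
G_yy = np.mean(Y * np.conj(Y), axis=0).real
G_xy = np.mean(X * np.conj(Y), axis=0)

coherence = np.abs(G_xy) ** 2 / (G_xx * G_yy)
print(float(coherence.mean()))  # roughly 0.8, between 0 and 1 at every line
```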

Hopefully this is helpful to try to understand what a CPSD matrix is and why it is complex. It's how I think about it anyway. Like I said previously, there are some more implementation details than are shown here that I think would only confuse the issue in this discussion. If you are interested in learning more, I might suggest you download SDynPy and play around with it. There's an example problem on random vibration control that might help you understand the theory a bit better. I'll caution that we've done a lot of updating on the MIMO vibration functions in SDynPy, so some of the example problem might be a bit out of date, but I think overall it would still be helpful.
