(Each formula below is given twice: first in pseudo-C, then in pseudo-LaTeX.)

Zorica, I hope this helps. Here's the theory as I've been explaining it to
you all semester. :^) I added the non-recursive filter per Dr. Huber's
suggestion. It's kind of short, but there's not much to say without actual
experience.

____________________________________

Signal conditioning and calibration.

1. Signal conditioning.

The signal arriving from the Argus AD contains an undesirable noise
component that must be eliminated via a low-pass filter. For the sake of
flexibility, this filter will be implemented in software. It will live in
either ss-hw or ss-core (probably the latter).

One simple low-pass filter, the recursive low-pass, is as follows:

    y = (1 - a) * y' + a * x;

    y = (1 - a)y' + ax

where:

    y  = output data (filtered sample)
    y' = previous output
    x  = input data (unfiltered sample)
    a  = constant between 0 and 1

This filter does a fairly good job of "blurring" the signal to reduce the
impact of high-frequency noise. The value of 'a' is most easily selected
by trial and error, but it could probably be calculated by carefully
observing the variance of the unfiltered signal and applying some magical
math formula I either forgot or haven't seen before.

The low-pass doesn't have to be recursive; it can also do its job given
some history of the input signal:

    for (i = 0, y = 0; i <= n; ++i)
            y += a[i] * x[i];
    /* shift the window; the next sample is stored in x[n] */
    memmove(&x[0], &x[1], n * sizeof(*x));

    y = \sum_{i=0}^{n} a_i x_i

where:

    a[i] = averaging weight #i
    x[i] = input sample #i
    n    = index of the newest sample (the window holds n + 1 samples)

Again, the values for 'a_i' are most easily selected by trial and error. I
think a Gaussian curve would be about right, but I don't know specifically
which parameters to use. I think all this stuff is in the Data Acquisition
book... (A fuller C sketch of both filters appears after section 2.)

2. Applying calibration.

The filtered data signal is only useful if the program knows how to
interpret it. The simplest way to do this is via a lookup table of
calibration points, using linear interpolation between points. The math:

    x = (y - y0) * (x1 - x0) / (y1 - y0) + x0;

    x = \frac{(y - y_0)(x_1 - x_0)}{y_1 - y_0} + x_0

where:

    x        = force applied to the sensor
    y        = filtered sensor output
    (x0, y0) = calibration sample with maximal y0 less than or equal to y
    (x1, y1) = calibration sample with minimal y1 greater than or equal to y
    x0       = known force associated with output value y0
    x1       = known force associated with output value y1

Graphically:

         y
     255 |. . . . . . . . . . . ....o
         |. . . . . . . . .o''''    .
         |             ..''   .     .
  y1.....|..........o''       .     .
     127 |       .' :         .     .
  y......|....x'    :         .     .
         |  .':     :         .     .
  y0.....|.o  :     :         .     .
       0 |_:__:_____:____.____._____.___ x
         0 :  :     :   1kg  2kg   3kg
           :  :     :
           x0 x     x1

Not very hard. Using higher-order interpolations (cubic, etc.) would yield
a smoother curve, but unless the sensors have a markedly nonlinear
response there would be no benefit to implementing them.

This simple algorithm cannot, however, handle non-monotonicities in the
table. By this I mean the case of the sensor yielding the same 'y' value
for different (non-contiguous) 'x' values, so that a single output no
longer identifies a unique force. I'm not sure how to handle this
algorithmically. It would probably involve closely tracking the signal and
using information from the other pressure sensors to guess the weight on
the sensor in question. It would never be optimal, and it would be a waste
of time to implement; it would be best to simply discard and replace the
faulty sensor or ADC.
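Here is a fuller C sketch of the two filters from section 1, as they might
look in ss-core. Untested, and the function names (lp_recursive,
lp_window) are mine, not anything that exists yet:

    #include <string.h>

    /* Recursive low-pass: y = (1 - a) * y' + a * x.  '*state' holds the
     * previous output y'; pass the same pointer on every call.
     * 0 < a < 1; smaller 'a' means heavier smoothing. */
    static double
    lp_recursive(double *state, double a, double x)
    {
            *state = (1.0 - a) * *state + a * x;
            return *state;
    }

    /* Non-recursive low-pass over a window of n + 1 samples.  'x' holds
     * the sample history, oldest first; 'a' holds the averaging weights,
     * which should sum to 1.  The newest sample is shifted in, then the
     * weighted sum is taken. */
    static double
    lp_window(double *x, const double *a, int n, double sample)
    {
            double y = 0.0;
            int i;

            memmove(&x[0], &x[1], n * sizeof(*x));  /* drop the oldest */
            x[n] = sample;
            for (i = 0; i <= n; ++i)
                    y += a[i] * x[i];
            return y;
    }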
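For section 2, here's how the table lookup might look in C. The assumption
is that the calibration table is an array of (force, output) pairs sorted
by ascending output; 'struct cal_point' and 'cal_lookup' are illustrative
names only:

    struct cal_point {
            double force;   /* known force x (grams) */
            double output;  /* averaged sensor output y */
    };

    /* Convert a filtered reading 'y' to a force by linear interpolation.
     * 'tab' must hold at least two points sorted by ascending output
     * (i.e., the table is monotonic).  Readings outside the table
     * extrapolate along the end segments. */
    static double
    cal_lookup(const struct cal_point *tab, int npoints, double y)
    {
            int i;

            /* find the segment [tab[i], tab[i + 1]] that brackets y */
            for (i = 0; i < npoints - 2; ++i)
                    if (y <= tab[i + 1].output)
                            break;

            return (y - tab[i].output)
                 * (tab[i + 1].force - tab[i].force)
                 / (tab[i + 1].output - tab[i].output)
                 + tab[i].force;
    }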
3. Acquiring calibration.

The following process is repeated for each of the four sensors in the
shelf. The sensor is calibrated against a series of known weights,
including 0 g. It's probably sufficient to test only three points, but
additional points cannot hurt, since they give a closer approximation of
the true response curve.

The cabinet is emptied, then each weight in turn is placed directly above
the sensor. The software waits for the user to signal that she is finished
placing the weight and has removed her hand from the cabinet. Then it
waits a little longer for the signal to settle and averages many samples.
This average sample value is added to the calibration table along with the
known weight. (A rough C sketch of this step appears at the end of this
note.)

It's important that the sensors also be calibrated for zero load, that is,
with no added weights on the shelf. Since the shelf itself exerts
considerable pressure on the sensors, there will be a bias in the sensor
output, and this bias must be known for the linearization algorithm
presented above to work.

If in the course of calibration a non-monotonicity is detected, the
calibration utility will ask the user to try again. After enough repeated
failures, the conclusion is that the sensor or the ADC is faulty and needs
to be replaced.

One possible enhancement is for the calibration utility to measure the
variance of the signal and use that information to calculate the optimal
'a' value for each sensor. If different weights yield significantly
different variances, the 'a' values can be placed in the calibration table
and interpolated by the linearization routine; the 'a' value corresponding
to the previous output weight (y') would be used. But if the sensors are
more-or-less uniform with regard to noise (which they should be, since
they all use the same Argus AD device), a single 'a' value (or set of 'a'
values, for the non-recursive filter) can be used for all samples read on
all sensors.

4. Our reality.

Unfortunately, with weights so small compared to the full range of the
sensor (25 lb), it's pretty difficult to test any of the above theory,
because the sensor is simply not capable of telling the difference
between, say, 500 g and 520 g. Therefore we haven't been able to
experiment with the formulas to determine which 'a' value(s) to use, or
whether to use the recursive or the non-recursive low-pass filter.
Hopefully the 1-lb sensors will come in soon...
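Appendix: a rough C sketch of the acquisition step from section 3. All of
the names here (read_sensor, wait_for_user, settle_delay, NSAMPLES) are
made up for illustration, none of them exist yet, and the sample count is
a guess; cal_point is the table-entry struct from the section 2 sketch:

    #define NSAMPLES 256    /* samples to average per weight; a guess */

    /* Hypothetical helpers, to be provided by the calibration utility. */
    extern void   wait_for_user(void);     /* "hand out of the cabinet" */
    extern void   settle_delay(void);      /* let the signal settle */
    extern double read_sensor(int sensor); /* one filtered sample */

    /* Record one calibration point for 'sensor' under a known weight. */
    static struct cal_point
    cal_acquire_point(int sensor, double grams)
    {
            struct cal_point p;
            double sum = 0.0;
            int i;

            wait_for_user();
            settle_delay();
            for (i = 0; i < NSAMPLES; ++i)
                    sum += read_sensor(sensor);
            p.force = grams;
            p.output = sum / NSAMPLES;
            return p;
    }

    /* The table is usable only if output increases with force; anything
     * else is the non-monotonic case, and the user should try again. */
    static int
    cal_is_monotonic(const struct cal_point *tab, int npoints)
    {
            int i;

            for (i = 1; i < npoints; ++i)
                    if (tab[i].output <= tab[i - 1].output)
                            return 0;
            return 1;
    }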