Andrew Que

August 01, 2015

Windowed Standard Deviation

I wrote previously about how a windowed average can be used to do filtering, and mentioned that a windowed average also allows for the calculation of standard deviation. In this article, I will look at accelerating the calculation of standard deviation.

The first question one might ask is what standard deviation is good for when filtering data. Standard deviation in a filter reflects how much noise there is on the signal being filtered: the higher the standard deviation, the lower the signal-to-noise ratio. Knowing the standard deviation helps select a filter window size so the signal can be extracted from the noisy data. That can often be calculated without having standard deviation in the final code, but an adaptive filter might use standard deviation to determine the filter window.

Standard deviation might be used to indicate the incoming data is becoming unusable. It might also be used to detect sudden changes in the value being filtered. If the average signal level changes abruptly, standard deviation will rise as the filter transitions from the old value to the new.

Here we have a windowed average of 5 samples. The signal changes abruptly to 100, and then back to 0. The windowed average takes 5 samples to fully respond to this change. During that time, the standard deviation rises and falls. In this way it can act as an indicator of such a change before the average reacts.

There are other uses for knowing standard deviation. By considering what standard deviation means about the data being acquired, additional uses can be conceived.

The classic equation for calculating standard deviation is as follows:

s = sqrt( Σ ( x_i - x̄ )² / ( N - 1 ) )

Here x is an array of data points, N is the number of data points, s is the corrected sample standard deviation (what most people simply call standard deviation), and x̄ is the average of x. There is something one will notice about this calculation: the average must be known before the standard deviation can be computed.
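As a minimal sketch, the classic two-pass calculation might look like this in C++ (the function name is my own, not from the article's code):

```cpp
#include <cmath>
#include <vector>

// Corrected sample standard deviation using the classic two-pass method:
// first compute the mean, then sum the squared deviations from it.
double standardDeviation( std::vector< double > const & x )
{
  size_t const N = x.size();
  if ( N < 2 )
    return 0.0;

  // Pass 1: the average must be known first.
  double mean = 0.0;
  for ( double value : x )
    mean += value;
  mean /= static_cast< double >( N );

  // Pass 2: sum of squared deviations, divided by N - 1.
  double sum = 0.0;
  for ( double value : x )
    sum += ( value - mean ) * ( value - mean );

  return std::sqrt( sum / static_cast< double >( N - 1 ) );
}
```

Note the two loops: the second cannot start until the first finishes, which is exactly the problem for a running calculation.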

I've shown in previous articles a simple windowed average template class that can keep the average without having to do a complete calculation over the entire array. For standard deviation I didn't think this was possible because the average needed to be known before calculating standard deviation. It turns out I was mistaken: there is an iterative method. First, the iterative version of an average:

x̄_n = x̄_(n-1) + ( x_n - x̄_(n-1) ) / n

Then the standard deviation sum becomes:

S_n = S_(n-1) + ( x_n - x̄_(n-1) )( x_n - x̄_n )

And standard deviation at any point n is:

s_n = sqrt( S_n / ( n - 1 ) )
That is nice for doing standard deviation in a single pass, but doesn't help us much with a continuous windowed standard deviation. In order to do this windowed, we need to be able to somehow subtract off older values from the running sum. This isn't possible here because of the divide by n-1 at the end.

Failing to find a solution to this, I went in search of another method for doing windowed standard deviation and came across this article. The article presents this form for standard deviation:

s = sqrt( ( Σ x_i² - ( Σ x_i )² / n ) / ( n - 1 ) )

This can be decomposed into two running sums:

A_n = A_(n-1) + x_n
Q_n = Q_(n-1) + x_n²

Now that we have two running sums, we can make those circular:

A_n = A_(n-1) + x_n - x_(n-w)
Q_n = Q_(n-1) + x_n² - x_(n-w)²

Where w is the size of the window. We assume that x_n = 0 for all n < 0. That is, the circular buffers start filled with zero. This has another advantage in that the average is kept alongside the standard deviation.

Here is a template class that does both a windowed average and standard deviation. It is derived from my earlier work that just did the average. As before, the template can do any type of input as long as it supports basic arithmetic functions, and has an overloaded version of the square root function sqrt.

#include <cmath> // <- For 'std::sqrt'.

//==============================================================================
// Template class for keeping a windowed average and standard deviation.
// Assumes math operations on 'TYPE' are computationally expensive, so they
// are used sparingly.
// For standard deviation to work, sqrt must be specialized to handle the
// template type.
//==============================================================================
template< class TYPE >
  class AverageStandardDeviation
  {
    protected:
      // Storage of all the data points that make the average.
      TYPE * array;

      // A representation of zero in the type of this template.
      TYPE const zero;

      // Index into 'array' to save next data point.
      unsigned currentIndex;

      // How many data points are currently in array.
      unsigned top;

      // Length of 'array'.
      unsigned length;

      // Running sum used to compute average.
      TYPE sum;

      // Running sum of squares used for standard deviation.
      TYPE squareSum;

    public:
      //------------------------------------------------------------------------
      // Create moving average with 'lengthParameter' number of samples to be
      // averaged together.
      //------------------------------------------------------------------------
      AverageStandardDeviation( unsigned lengthParameter )
      :
        zero( static_cast< TYPE >( 0 ) )
      {
        length = lengthParameter;

        // Allocate storage for the data points.
        array = new TYPE[ length ];

        // NOTE: 'zero' was defined with a cast in the initializer list.
        // This is important if TYPE is truly a class.

        reset();
      }

      //------------------------------------------------------------------------
      // Free sample storage.
      //------------------------------------------------------------------------
      ~AverageStandardDeviation()
      {
        delete[] array;
      }

      //------------------------------------------------------------------------
      // Reset all data in average.
      // NOTE: Average result will be the equivalent of zero.
      //------------------------------------------------------------------------
      void reset()
      {
        // Empty array.
        for ( currentIndex = 0; currentIndex < length; ++currentIndex )
          array[ currentIndex ] = zero;

        currentIndex = 0;
        top = 0;
        sum = zero;
        squareSum = zero;
      }

      //------------------------------------------------------------------------
      // Resize the number of samples used in average.
      // If the size becomes larger, then the average will not change until
      // more data is added.
      // If the size becomes smaller, then the last 'newLength' samples will be
      // copied to new average.
      //------------------------------------------------------------------------
      void resize( unsigned newLength )
      {
        // Allocate memory for new array.
        TYPE * newArray = new TYPE[ newLength ];

        // Reset sums.
        sum = zero;
        squareSum = zero;

        for ( unsigned count = newLength; count > 0; --count )
        {
          unsigned index = count - 1;

          // Do we have data to copy from the old array at this index?
          if ( index < top )
          {
            // Get previous data index.
            if ( currentIndex == 0 )
              currentIndex = length - 1;
            else
              currentIndex -= 1;

            // Save data in new array.
            newArray[ index ] = array[ currentIndex ];

            // Sum this data.
            sum += newArray[ index ];
            squareSum += newArray[ index ] * newArray[ index ];
          }
          else
            // Set this position to zero.
            newArray[ index ] = zero;
        }

        // Delete old array and switch to new array.
        delete[] array;
        array = newArray;

        // Move top if the new length is lower than previous top.
        if ( top > newLength )
          top = newLength;

        // New length.
        length = newLength;

        // Set new index location.
        currentIndex = top;
        if ( currentIndex >= length )
          currentIndex = 0;
      }

      //------------------------------------------------------------------------
      // Return the computed average.
      // NOTE: Uses the data in the average thus far.  If average isn't full,
      // then only those data points in the array are used.
      //------------------------------------------------------------------------
      TYPE getAverage() const
      {
        TYPE result;

        if ( top > 0 )
          result = sum / static_cast< TYPE >( top );
        else
          result = zero;

        return result;
      }

      //------------------------------------------------------------------------
      // Return the computed standard deviation.
      //------------------------------------------------------------------------
      TYPE getStandardDeviation() const
      {
        TYPE result;

        if ( top > 1 )
        {
          using std::sqrt;
          result  = sum * sum;
          result /= static_cast< TYPE >( top );
          result  = squareSum - result;
          result /= static_cast< TYPE >( top - 1 );
          result  = sqrt( result );
        }
        else
          result = zero;

        return result;
      }

      //------------------------------------------------------------------------
      // Return the number of data points in the moving average so far.
      //------------------------------------------------------------------------
      unsigned getNumberOfDataPoints() const
      {
        return top;
      }

      //------------------------------------------------------------------------
      // Add new data to average.
      //------------------------------------------------------------------------
      void push( TYPE newData )
      {
        TYPE oldPoint = array[ currentIndex ];

        // Remove old data from average sum.
        sum -= oldPoint;

        // Remove old data from square sum.
        squareSum -= oldPoint * oldPoint;

        // Add new data to average sum.
        sum += newData;

        // Add new data to average square sum.
        squareSum += newData * newData;

        // Save new data.
        array[ currentIndex ] = newData;

        // Advance index.
        ++currentIndex;

        // If this marks the top...
        if ( currentIndex > top )
          // Update the top.
          top = currentIndex;

        // Wrap-around index.
        if ( currentIndex >= length )
          currentIndex = 0;
      }

  };
 

What's nice about this code is that adding a new sample to the average only requires a few operations. An array of the input data does need to be kept so old samples can be subtracted off, but both the average and standard deviation are kept as sums that can quickly be turned into an exact result at any point. Standard deviation requires a few operations to translate the sums into a result, but this system is much faster than having to run through all the data points every time in order to get a result.
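For illustration, here is a minimal double-only sketch of the same running-sum technique (the names are my own, not from the template class above), which is easy to verify against a hand calculation:

```cpp
#include <cmath>
#include <vector>

// Keep a circular buffer plus 'sum' and 'squareSum', and derive the average
// and standard deviation on demand from the two running sums.
struct WindowedStats
{
  std::vector< double > buffer;
  unsigned index = 0;
  unsigned count = 0;
  double sum = 0.0;
  double squareSum = 0.0;

  explicit WindowedStats( unsigned window )
  : buffer( window, 0.0 ) {}

  void push( double value )
  {
    double old = buffer[ index ];
    sum       += value - old;               // Remove oldest, add newest.
    squareSum += value * value - old * old; // Same for the square sum.
    buffer[ index ] = value;
    index = ( index + 1 ) % buffer.size();
    if ( count < buffer.size() )
      count += 1;
  }

  double average() const
  {
    return ( count > 0 ) ? sum / count : 0.0;
  }

  double standardDeviation() const
  {
    if ( count < 2 )
      return 0.0;
    return std::sqrt( ( squareSum - sum * sum / count ) / ( count - 1 ) );
  }
};
```

Pushing the five samples 1004, 1003, 1003, 998, 995 gives an average of 1000.6 and a standard deviation of about 3.91, matching the two-pass formula.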

July 31, 2015

Low-Pass Filters vs. Windowed Averages

Averages are used in software all the time as a low-pass filter. One common filter method is a windowed average. This is an average using only a specific number of samples: each new sample replaces the oldest sample, and the average is taken across this set of data. Consider a scenario where a new data point comes in every second, and you need to keep track of the average of this data for the past 5 seconds.

Data    Average
1004
1003
1003
 998
 995    1000.6
 993     998.4
1001     998
1008     999
1000     999.4
 998    1000
Here we see 10 samples. The average doesn't start until the 5th sample because we want an average of 5 samples. After the 5th sample, the average is just an average of the previous 5 samples. To implement this, typically a circular buffer is used. We keep 5 samples, and always overwrite the oldest sample when a new sample arrives. Here are the 10 iterations of a circular buffer from the scenario above:

 

             1     2     3     4     5     6     7     8     9    10
1         1004  1004  1004  1004  1004   993   993   993   993   993
2               1003  1003  1003  1003  1003  1001  1001  1001  1001
3                     1003  1003  1003  1003  1003  1008  1008  1008
4                            998   998   998   998   998  1000  1000
5                                  995   995   995   995   995   998
Average                         1000.6 998.4   998   999 999.4  1000

Each column shows the buffer after a new sample arrives. Notice how in the 6th column, the first row has been replaced; in the 7th column, the second row is replaced, and so on. Implementing this system in software is very simple, and there are some tricks you can do to keep the average up-to-date without having to sum the entire array each time, which I have written about.
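A minimal sketch of that running-sum trick (the function name and parameters are my own): subtract the sample being overwritten, add the new one, and the sum stays current without re-summing the whole buffer.

```cpp
#include <cmath>
#include <cstddef>

// Push a new sample into a circular buffer while maintaining a running sum,
// and return the current windowed average.
double pushSample(
  double buffer[],   // Circular buffer of 'size' samples.
  size_t size,       // Window size (5 in the example above).
  size_t & index,    // Index of the oldest sample; advanced by this call.
  double & sum,      // Running sum of the buffer contents.
  double newSample
)
{
  sum -= buffer[ index ];      // Remove the oldest sample.
  sum += newSample;            // Add the new sample.
  buffer[ index ] = newSample; // Overwrite the oldest slot.
  index = ( index + 1 ) % size;

  return sum / size;           // Current windowed average.
}
```

Starting from the 5th column of the table (buffer 1004, 1003, 1003, 998, 995) and pushing 993 gives 998.4, matching the 6th column.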

A windowed average suffers from a couple of drawbacks however. The biggest is the need to keep an array of all the previous samples. In a small processor one might not have enough memory to keep the array needed for a windowed average, especially if there are a lot of samples.

When doing filtering this can be solved instead using a first-order low-pass filter. In the past I've written about using a software low-pass filter. I most often use this filter when reading what should, for the most part, be a DC value from an Analog to Digital Converter (ADC). Noise in this signal can be introduced for a number of reasons, but this simple filter is a very inexpensive way to take the noise out.

Here is the function for a first-order low-pass filter:

O_n = C * I_n + ( 1 - C ) * O_(n-1)

Where I is the input data, O is the output, and C is the filter coefficient with a value between 0 and 1. In implementation it is easier to think of this low-pass filter as a poor man's windowed average. Rather than think of a filter coefficient, consider the number of samples that will be averaged together. In this scenario, the equation becomes:

O_n = I_n + O_(n-1) - O_(n-1) / N

Where N is the number of samples. Note that in this form O is a running sum: at steady state it settles at N times the input, so the filtered value is O / N. This acts like a windowed average over N samples, sort of. The difference is most apparent when the input makes a sudden change. Take for instance this setup:

Here is an average of the last 5 samples using both the windowed and low-pass methods. After 5 samples, the data makes a large change. The windowed average reaches the new value after 5 samples, where the low-pass filter takes much longer. Theoretically, the low-pass filter never reaches the new value because it only approaches it asymptotically. If using integer numbers, it takes about 20 samples for this scenario. The larger the change, the longer it takes the low-pass filter to reach the new value. When doing a software filter, this is not typically a problem, just something that must be accounted for when selecting the value for N.
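The comparison can be simulated with a short sketch (helper names are my own): feed a 0-to-100 step into a 5-sample windowed average and into the N = 5 low-pass sum form.

```cpp
#include <cmath>
#include <deque>
#include <numeric>
#include <vector>

// Step responses of both filters, sample by sample.
struct StepResponse
{
  std::vector< double > windowed;
  std::vector< double > lowPass;
};

StepResponse compareStepResponse( unsigned samples )
{
  unsigned const N = 5;
  std::deque< double > window( N, 0.0 );  // Windowed average buffer.
  double lowPassSum = 0.0;                // Low-pass running sum.
  StepResponse result;

  for ( unsigned n = 0; n < samples; ++n )
  {
    double input = 100.0;  // Step input (both filters start settled at 0).

    // Windowed average: drop oldest, add newest, average the buffer.
    window.pop_front();
    window.push_back( input );
    result.windowed.push_back(
      std::accumulate( window.begin(), window.end(), 0.0 ) / N );

    // Low-pass sum form: O_n = I_n + O_(n-1) - O_(n-1) / N.
    lowPassSum += input - lowPassSum / N;
    result.lowPass.push_back( lowPassSum / N );
  }

  return result;
}
```

After 5 samples the windowed average sits exactly at 100, while the low-pass filter has only reached about 67.2 and keeps creeping toward 100.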

One nice thing about using a software low-pass filter with N rather than a coefficient is the ability to avoid taxing operations like multiplies and divides. If N is a power of two, the divide becomes a right-shift. This allows for a heavy filter, like N = 256, with barely any overhead. Here are the two lines of code needed to add a new sample to this average:

averageSum -= averageSum >> AVERAGE_SHIFT;
averageSum += newSample;

When the average is desired, it is simply the average sum divided by N, again done with a shift.

average = averageSum >> AVERAGE_SHIFT;

Getting this average started often involves forcing the average sum to reflect the first sample. That is, force the average to be whatever the first sample is.

averageSum = firstSample << AVERAGE_SHIFT;

Since the low-pass filter takes time to react to changes, this allows the filter to start much closer to the actual average, assuming the first sample is fairly close to the average.
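Putting the three fragments together, a complete filter might look like this; the struct wrapper, AVERAGE_SHIFT = 3 (N = 8), and the 16-bit sizes are my own illustrative choices, not from the original code.

```cpp
#include <cstdint>

enum { AVERAGE_SHIFT = 3 };  // N = 2^3 = 8 samples.

// Power-of-two low-pass filter: all divides are right-shifts.
struct ShiftFilter
{
  uint16_t averageSum = 0;

  // Seed the sum so the filter starts at the first sample.
  void start( uint16_t firstSample )
  {
    averageSum = static_cast< uint16_t >( firstSample << AVERAGE_SHIFT );
  }

  // Add a new sample: subtract 1/N of the sum, then add the sample.
  void push( uint16_t newSample )
  {
    averageSum -= averageSum >> AVERAGE_SHIFT;
    averageSum += newSample;
  }

  // The filtered value is the sum divided by N (a right-shift).
  uint16_t average() const
  {
    return static_cast< uint16_t >( averageSum >> AVERAGE_SHIFT );
  }
};
```

Seeded at 1000, the filter stays at 1000 as long as 1000 keeps arriving; a single sample of 2000 moves the average to 1125, one eighth of the way toward the new value.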

In addition to considerations about how much filtering is being done, the other factor is integer size. For example, to filter 10-bit ADC data with a 16-bit average sum, the shift cannot be larger than 6 (N = 64) without overflowing the sum.
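That sizing rule can be checked with a small compile-time helper (my own sketch): the largest safe shift satisfies maxSample << shift <= the maximum of the sum type.

```cpp
#include <cstdint>

// Largest shift such that 'maxSample << shift' still fits in a sum whose
// maximum value is 'sumMax'.
constexpr unsigned maxShift( uint32_t maxSample, uint32_t sumMax )
{
  unsigned shift = 0;
  while ( ( maxSample << ( shift + 1 ) ) <= sumMax )
    shift += 1;
  return shift;
}

// 10-bit ADC: maximum sample is 1023; 16-bit sum: maximum is 65535.
// 1023 << 6 = 65472 fits, 1023 << 7 = 130944 does not.
static_assert( maxShift( 1023, 65535 ) == 6, "N = 64 is the limit" );
```

With 8-bit data (maximum 255) the same 16-bit sum allows a shift of 8, or N = 256.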

The biggest benefit of using a software low-pass filter is that it only requires storage for an accumulator, no matter how many samples are to be averaged together. Together with using a shift for the divide, it is both small and fast. This makes it very useful in small microprocessors and high-speed digital signal processing.

There are times when a windowed average is better than a low-pass filter. They work better in situations where the average value can change abruptly to a new steady state because they react faster to the change. However, there is often a way to account for this with a low-pass filter as well. Sometimes an abrupt change can be anticipated, and the average reset when it takes place. For instance, if monitoring a voltage that can be switched from a low level to a higher level, the program might know this change was going to take place (perhaps because the software initiated it). When it happens, the low-pass sum is reset with the first-sample method after the change has taken place. This allows the filter to react quickly to the change. This also works with the windowed average.

I personally use a windowed average when I have plenty of memory to work with, if for no other reason than that calculating the coefficient in meaningful terms is easy. For instance, if one has 1 sample/sec and averages 10 samples together, the filter is a 10 second average. With a low-pass filter this is harder to quantify. Keeping a windowed average also lets one keep additional statistics such as standard deviation, which we will address in the next article. When memory and/or speed are key, nothing beats a power-of-two low-pass filter.

   My new microcontroller board arrived.  This uses an Atmel Xmega128A4U CPU.  I've come to really like this line of processors as I have used them on a couple of projects at work.  Most of the hobbyist community likes the Mega series processors from Atmel, but the Xmega series is more powerful and has better integrated peripherals.  I took it to work just to verify I could get it programmed from Atmel's standard compiler suite.  While I like the hardware, the software libraries leave much to be desired.  Luckily, I have worked with them enough on the job that I can get around using most of them.  I don't get how they could have made hardware so good, but a software interface so bad.  Anyway, I have the board blinking an LED at an exact frequency, which means it is doing what I expect.  So it is time to start getting it to work at home.
   Another swimming day, and another personal record: 700 yards in 9 minutes and 21 seconds.  That was my only accomplishment to mark for the day.  I have been doing so well with my front crawl using a pull buoy, mainly because I wanted to work my legs and arms separately.  Since I felt my front crawl endurance had been improving, I thought I would try without the buoy today.  It didn't work at all.  I did fewer than 10 laps before I was exhausted and had to stop.  I think I will have to have a longer pause between working on my legs and my un-buoyed front crawl.  Later in the session I went back to doing laps and felt a lot better about my progress.  So I think more recovery time is going to help.
   I did a 12.28 mile loop this afternoon in 52 minutes.  This makes an average of 14.11 MPH, which isn't too bad considering the loop contained one of the most brutal hill climbs in the area.  My phone tracking software messed up about a mile into the ride, and I had to restart the tracking about 2 miles later when I realized it wasn't running.  I've found having a lot of negative thoughts on the mind makes one push harder.
   Went out for coffee downtown Madison with my roomies.  I biked down and met them there.  The ride is just under 7 miles (each way) and goes pretty quick, but sadly this is all the biking I've done all weekend.  We all intended to bring computers and do some writing.  Steve and I were lost in our computers in no time.  Xen couldn't find his.  And poor Zach spent the entire time waiting for his to update the non-Linux OS.
   My work ride today was almost perfect, and would have been if a section of the Capital City Trail hadn't been closed.  The ride in was 14.9 miles and I made really great time: 1 hour and 3 minutes.  This gives an average speed of 14.8 mph.  Going through town I was sheltered from the southern winds, but I suspect they were more from the west than the forecast predicted.  On the ride back I had a fairly strong wind from the south.  I traveled with two stops: one for lunch and the other for dessert.  My total distance was 20.4 miles over 2 hours, 23 minutes.  My tracking application states my average speed (excluding stops) was 11.82 mph, significantly slower than my ride into work.  I think dessert was a bad idea, as I lost all drive after that point and traveled significantly slower.  The last part of the trip was much slower due to the strong south-west wind, which I had to face more or less head-on on the country roads on the north east side of Lake Mendota.
   The Octagon house layout is fairly efficient.  There are 4 large square rooms on each floor, with 4 small triangular rooms between the square rooms.  The triangular rooms provide storage.  The roof of the house is flat, but slopes inward.  This slope allows rainwater to be collected in a cistern on the 3rd floor.