Welcome back to Vibrations All Around Us! A blog series investigating the Digital Signal Processing (DSP) algorithms used to detect, measure, and extract meaning from the vibrations happening around us. This is an important post, as we will be introducing the fundamentals of spectral analysis in the frequency domain, along with one of the most important algorithms in engineering: the Fast Fourier Transform (FFT).

In the previous blogs, we looked at signal intensity (acceleration magnitude) with respect to the *time* at which it occurred. It’s also possible to observe signal intensity with respect to the *frequency* at which it occurs. To move from the time domain to the frequency domain, we can use the Fast Fourier Transform. Let’s imagine a signal composed of 2 cosines:

y(t) = 3\cos(2\pi \cdot 10t - \pi/2) + 8\cos(2\pi \cdot 4t)

We can summarize the energies in the signal as the following:

- A 10 Hz energy with amplitude 3 and a phase of -90°
- A 4 Hz energy with amplitude 8 and a phase of 0°

Let’s use the Fast Fourier Transform algorithm (FFT) to convert the time series of the signal to the frequency domain. We can calculate two plots from the FFT: amplitude vs. frequency and phase vs. frequency. Using the FFT and Python, we can generate the time series of the two-cosine equation outlined above, take the one-sided FFT of the time series signal, and plot the *amplitude* and *phase* of the frequency domain conversion.
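As a rough sketch of that workflow in Python (the 100 Hz sampling rate and 2-second capture length are illustrative choices, not values from the post; they are picked so both cosines complete an integer number of periods):

```python
import numpy as np

fs = 100                      # assumed sampling rate (Hz); anything > 20 Hz works
t = np.arange(0, 2, 1 / fs)   # 2 s of samples -> integer periods of 4 and 10 Hz

# the two-cosine signal from above
y = 3 * np.cos(2 * np.pi * 10 * t - np.pi / 2) + 8 * np.cos(2 * np.pi * 4 * t)

N = len(y)
Y = np.fft.rfft(y)                    # one-sided FFT of the real signal
freqs = np.fft.rfftfreq(N, 1 / fs)    # frequency axis (Hz)
amplitude = 2 * np.abs(Y) / N         # scale so peaks read as cosine amplitudes
phase_deg = np.degrees(np.angle(Y))   # phase axis (degrees)
```

Plotting `amplitude` and `phase_deg` against `freqs` (e.g. with matplotlib) gives the two frequency-domain plots described here: the 4 Hz and 10 Hz bins carry amplitudes 8 and 3 with phases 0° and -90°.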

We see that the FFT generates frequency domain plots which confirm our knowledge of the signal as we outlined in our “energy summary”: the two dominant frequencies at 4 and 10 Hz with amplitudes of 8 and 3, respectively, and phases of 0° and -90°, respectively. You’ll notice that there are non-zero amplitudes at the frequencies *around* 4 and 10 Hz. This is called “spectral leakage”, and can be expected when a non-integer number of periods is processed through the FFT.

You’re probably thinking “what’s the use of being able to express a couple of sinusoids in this fashion? In the real world, signals don’t take the form of nicely sculpted sinusoids.” The power of the FFT is that it can be scaled up to *any* number of sinusoids, as long as your sampling frequency is at least twice the highest frequency you are analyzing! Fourier stipulated that any waveform can be expressed as a sum of sinusoids of varying amplitude, frequency, and phase – meaning we can generate these plots for *any* waveform. That means the FFT can be applied to images to identify spatial frequencies, financial data to identify spending/cost seasonality, audio information to identify pitches, and accelerometer data to identify mechanical vibrations.

I’ve stuck the Nicla to the top of a small fan sitting on my desk. The fan blade that spins is going to have some small imbalance in it that will produce a centrifugal force on the housing, causing it and the Nicla’s accelerometer to vibrate at the frequency the fan blade is spinning at. I’ve taken 10 seconds of data while the fan is spinning. If we take the resultant of the accelerometer signal and plot the time series, we won’t be able to make much meaning out of the garbled mess, but if we instead take the FFT and plot magnitude vs frequency, we should see that the body of the fan shakes at the frequency that the fan is spinning at. Remember that the sampling rate determines how finely the signal is discretized in time, and it also caps the range of frequencies the FFT can represent: we are limited to analyzing frequencies below half our sampling rate (the Nyquist limit).

This is not going to look as pretty as the FFT of *y(t)* that we just did, because of noise which is present at a large number of frequencies and because the mechanical vibrations of my $20 desk fan are not going to vibrate at pure tones. Nonetheless, it’s obvious that there is a high energy signal at 45 Hz, indicating that the fan is probably spinning at this frequency. Let’s increase the speed of the fan by switching it from the “White Noise” to “Refresh” setting and create the same plot.

The fan’s speed has increased from 45 Hz to about 50 Hz. Not only does the fan’s speed (the dominant frequency) increase, but the dominant frequency also has a larger amplitude. The amplitude has more than doubled from only a 5 Hz speed increase, which could be the result of a couple of factors:

- In fluid mechanics, forces can grow rapidly with speed (aerodynamic loads scale roughly with the square of velocity).
- The structure that houses the rotating fan blade could have a vibrational harmonic around these speeds (more on this in the next blog).

The FFT algorithm is helpful for visualizing stationary signals, where frequency energies aren’t changing over time, but when signals aren’t stationary a spectrogram is preferred. The spectrogram batches the signal and performs an FFT on each batch, then plots the FFTs over time, using color intensity to indicate amplitude at the frequencies of interest.

My desk fan has 4 different speed settings. I took a 45 second clip with the Nicla in the same position. Over the experiment, I increased the speed every ~10 seconds. We can then create the spectrogram of this experiment using the following scipy function.
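The scipy call itself isn’t reproduced here, but it was presumably `scipy.signal.spectrogram`. A self-contained sketch, using a synthetic stand-in signal whose frequency steps up every 10 seconds (since the real 45-second fan capture isn’t included), might look like:

```python
import numpy as np
from scipy import signal

fs = 125                              # Nicla's approximate sampling rate (Hz)
t = np.arange(0, 40, 1 / fs)
freq_hz = 10 + 5 * (t // 10)          # stand-in: 10, 15, 20, 25 Hz in 10 s blocks
resultant = np.sin(2 * np.pi * freq_hz * t)

# f: frequency bins (Hz), times: center of each batch (s),
# Sxx: power at each (frequency, time) cell, rendered as color intensity
f, times, Sxx = signal.spectrogram(resultant, fs=fs, nperseg=256)
```

Something like `plt.pcolormesh(times, f, Sxx)` then renders the three dimensions described below, with `Sxx` as the color axis.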

The spectrogram displays three dimensions of information in a 2D plot:

- Time on the x-axis
- Frequency on the y-axis
- Amplitude (or phase if desired) as color intensity

Every 10 seconds, the desk fan is moved to a higher speed (marked by the red lines). After each increase, we can see that the dominant frequency moves higher and that its amplitude also increases. Great!

Now that you’re armed with one of the most powerful engineering algorithms in recent history, we’re going to explore a fun real-world application of FFT in the next lesson by making a musical tuner. See you then!

---

To get some inspiration for how we might build the pedometer, let’s first record some data with the Nicla on my shoe. I’ll go for a short walk with some pauses and turns in the middle. I took a total of 36 steps over the course of the experiment. Recall that resultant acceleration is calculated by taking the magnitude of the vector formed by adding the 3 acceleration components; for more information, refer to post #2 on Tools Setup.

Let’s zoom in on 3 steps that I’ve taken on my right foot, or 6 steps in total with both feet.

The first spike comes from picking my foot up off the ground and accelerating it forward in front of me (orange), the second spike is the impact of my foot striking the ground (purple), and in between these 2 events there is a slight ‘weightlessness’ while my foot slows down and falls before making contact with the ground (yellow). Finally, while my left foot is in contact with the floor and my right foot is taking a step, the resultant acceleration is very small (light blue). Intuitively, we could write an algorithm that identifies periods of high activity (green), then check that the waveform in this period of activity looks like how we expect a “step” to look. Let’s write some pseudocode for this that will work in real time. I’m going to use variance to measure stillness (see Blog 5 for a refresher on this):

- Wait for a period of high activity (wait for the windowed variance to exceed a threshold).
- Wait for the foot to leave the ground (wait for the resultant acceleration to exceed a threshold).
- Wait for 0.25 seconds.
- Wait for the foot to impact the ground (wait for the resultant acceleration to exceed another threshold). Step count += 2 and return to step 2.
- If instead a period of low activity occurs (the windowed variance dips below a threshold), return to step 1.
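The pseudocode above can be sketched as a real-time-style state machine. The thresholds, window length, and function name here are illustrative, not the post’s tuned implementation:

```python
import numpy as np

FS = 125                      # assumed sampling rate (Hz)
VAR_THRESH = 0.1              # windowed-variance "activity" threshold (illustrative)
ACCEL_THRESH = 2.0            # m/s^2 liftoff/impact threshold
REFRACTORY = int(0.25 * FS)   # samples to wait after a detected event

def count_steps(resultant, window=FS):
    """Walk the step-detection state machine over a resultant-acceleration trace."""
    steps, state, hold = 0, "IDLE", 0
    for i in range(window, len(resultant)):
        var = np.var(resultant[i - window:i])    # windowed variance
        x = resultant[i]
        hold = max(0, hold - 1)                  # count down the refractory period
        if state == "IDLE":
            if var > VAR_THRESH:                 # step 1: high activity begins
                state = "ARMED"
        elif state == "ARMED":
            if var < VAR_THRESH:                 # step 5: activity ended, reset
                state = "IDLE"
            elif hold == 0 and x > ACCEL_THRESH:
                state, hold = "AIRBORNE", REFRACTORY   # step 2: foot liftoff
        elif state == "AIRBORNE":
            if hold == 0 and x > ACCEL_THRESH:   # step 4: foot impact
                steps += 2                       # one stride counts both feet
                state, hold = "ARMED", REFRACTORY
    return steps
```

One deviation from the raw pseudocode: a refractory hold is also applied after the impact event, since otherwise the tail of the impact spike can immediately re-trigger the liftoff check. This is a sketch of the idea, not the post’s actual code.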

I’m going to use the Moving Average Filter from Blog 4 in this series to smooth out the high frequency noise, which will make the waveform in the previous graph look like this:
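For reference, the moving average smoothing from Blog 4 can be sketched in a few lines (the window length `n` is an illustrative choice):

```python
import numpy as np

def moving_average(x, n=10):
    # FIR smoother: each output is the mean of the last n input samples
    return np.convolve(x, np.ones(n) / n, mode="valid")
```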

From looking at the time series data, a safe acceleration threshold should be 2 m/s^2. We can also assume that 0.2 seconds of wait time after the foot-leaving-the-ground impulse is enough for the resultant acceleration to return below the threshold. I’ll be using the moving variance window from the previous blog for this implementation, so please refer to that if you haven’t done so already. To manage the decision-making process of our algorithm, I’m going to utilize a state machine structure. For a more complex system, where lots of other processes might be running, a state machine makes for a very readable program structure and allows us to easily scale and increase the complexity of our pedometer with few changes to the main running loop. Using a state machine also allows us to identify key characteristics of each footstep, such as the foot-liftoff and foot-impact events.

In this example, the program reads 32 steps, meaning we missed 4 steps that were taken. The misses came during periods of odd movement, such as turning or stopping steps. For someone going for a walk in the park, these “special” steps are going to represent a pretty small fraction of total steps taken, giving our algorithm pretty good performance. For someone walking in a more hectic environment, such as a busy city, we might need to improve our algorithm. Of course, in a wrist-worn device like a Fitbit, more advanced algorithms are used to intelligently separate steps from background noise. In fact, many smart wearables have been adapted to count steps, including wrist trackers, rings, and headwear! If you are building this type of product, please contact us and we can expertly tailor a step-counting algorithm to your application!

Not too bad! In the next lesson we’ll be going over the Frequency domain and Fast-Fourier Transform.

---

Welcome back to Vibrations All Around Us! In this blog post, we are going to be building an algorithm for a wearable device to rank your activity levels throughout the day. Let’s get into it.

When we think of vibrations, we typically think of high frequency movement occurring over a short period of time, but we can also observe vibrations over the course of minutes or hours. When it comes to looking at activity levels, we are interested in vibrations that occur over a very long period of time. I’m going to hold the Nicla in my hand to emulate a wrist-worn wearable device and perform a series of tasks over 2 minutes. These tasks will go from highly active to low activity. During the experiment, I will do the following for 30 seconds each:

- Sit-Ups
- Pretend to play a racing game on my phone
- Browse social media lying down
- Lie completely still (fake nap)

Let’s take a look at the time series to see what we’re working with. For this blog, we’ll be using the resultant acceleration (look at Blog 2 for a refresher on this) to perform our analysis, but for a more complex system you would preserve and analyze the individual components of the sensor. Below is a plot of the resultant, where I’ve added red lines to mark the points where I switched activities in the experiment:

In moments of high activity, we as humans are likely to experience large impacts, such as when running and moving heavy objects. Because of this, we can characterize active moments in time as time periods with large spikes of acceleration, or periods where acceleration has a high *variance*. Variance is a measure of dispersion of values from their mean. The formula is as follows:

S^{2} = \frac{1}{n-1}\sum_{i=1}^{n}{(x_{i}-\bar{x})^{2}}

where *S^{2}* is the sample variance, x̄ is the mean of the sample, n is the number of samples in the set, and x_i is the i-th sample. Our activity-ranking algorithm will work as follows:

- Choose variance bins to correspond with activity levels
- Window the last X seconds of acceleration data and take the variance of the resultant over the window
- Assign to an activity level based on the variance bin values
- Wait for X seconds, return to 2.
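The loop above can be sketched as follows. The bin edges are the values chosen in this post’s table, and the 5 s window / 1 s update come from the discussion below; the function name itself is illustrative:

```python
import numpy as np

FS = 125              # assumed sampling rate (Hz)
WINDOW = 5 * FS       # 5 s variance window
STEP = 1 * FS         # produce one score per second
BIN_EDGES = [4e-5, 4e-4, 3e-3]   # variance bin boundaries, (m/s^2)^2

def activity_scores(resultant):
    """Score each second of a resultant-acceleration trace, 1 (still) to 4 (active)."""
    scores = []
    for end in range(WINDOW, len(resultant) + 1, STEP):
        var = np.var(resultant[end - WINDOW:end])       # variance over the window
        scores.append(1 + sum(var > edge for edge in BIN_EDGES))
    return scores
```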

First, let’s decide on how often we want to generate an activity score – I’ll set a parameter to generate one every second, and then create a 5 second window to calculate variance from. This might be familiar to you from our previous lesson, where we applied another FIR filter, the moving average filter. This will be much like the MAF from the previous lesson, except we won’t be calculating an output at every time step, both to save memory and because we don’t need an activity score at millisecond resolution throughout the day (I don’t need that level of scrutiny in my life).

After visually inspecting the variance changes between stages, we can choose variance ranges that assign an activity score. Let’s say that an activity score of 1 is being completely still, and an activity score of 4 is actively exercising. I’ve chosen the following values:

| Score | Variance Range [(m/s^2)^2] |
| ----- | -------------------------- |
| 4     | 3e-3 < X ≤ ∞               |
| 3     | 4e-4 < X ≤ 3e-3            |
| 2     | 4e-5 < X ≤ 4e-4            |
| 1     | 0 < X ≤ 4e-5               |

This looks OK. Let’s make the variance calculation non-causal by centering the window on the point where each variance is calculated, removing any delay, and then let’s add hysteresis to remove any jittering effects.
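Hysteresis can be added in several ways. One simple, illustrative approach (not necessarily the post’s) is to adopt a new score only after it has persisted for a few consecutive updates, which suppresses jitter right at a bin boundary:

```python
def with_hysteresis(raw_scores, hold=3):
    """Suppress score jitter: only switch after `hold` consecutive agreeing updates."""
    if not raw_scores:
        return []
    smoothed = [raw_scores[0]]
    candidate, run = raw_scores[0], 0
    for s in raw_scores[1:]:
        if s == candidate:
            run += 1                  # the candidate score persists
        else:
            candidate, run = s, 1     # a new candidate appears
        # adopt the candidate only once it has persisted long enough
        smoothed.append(candidate if run >= hold else smoothed[-1])
    return smoothed
```

A margin on the bin edges themselves (switch up at a slightly higher variance than you switch down) would achieve a similar effect.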

Not too shabby! Keep in mind that the chosen variance range values will need to be tuned depending on the location of your wearable device. Of course, in the real world, users are not running a nice controlled experiment, so other algorithms which *learn* the user’s behavior would be much more successful. Think Circuits would be happy to build those into your next activity tracking product!

See you next week, where we’ll be doing an important lesson on the Frequency domain and the Fast-Fourier Transform.

---

Welcome back to the Vibrations All Around Us blog series, where we will be learning about fundamental signal analysis algorithms and using them for real-world applications. In this blog post, we will develop a crude leveler. The goal of this lab is to be able to move the Nicla chip around in space, and get feedback once the chip is within 1° of level. We will use the fact that the accelerometer is sensitive to gravity, and align the Z-axis of the accelerometer with gravity. Let’s get into it.

At its core, to make a leveler we need to check the angle formed between the Nicla and the unit Z vector. Here’s what our leveler pseudocode looks like:

- Calculate the angle formed between the Nicla and the Unit Z vector
- Check that this angle is below 1°

A naive solution: with a 3-axis accelerometer, we should be able to look at a single axis, in this case the Z-axis, and once its reading is above a threshold, say the Nicla is level. The 1° threshold converted to radians is:

1\degree \cdot \frac{\pi}{180} = 0.017 \text{ rad}

Then, we can determine the corresponding gravity threshold:

1 \text{G} \cdot \cos(0.017 \text{ rad}) = 0.99985 \text{ G}

So whenever the Z-acceleration is less than 0.99985 G the device isn’t level, and when the Z-acceleration is greater than or equal to 0.99985 G the device is level. This would work if we had a perfect accelerometer with high resolution and no scaling errors (in practice, 1 G doesn’t map to exactly 4096 counts). Let’s make a more robust leveler that takes advantage of the other 2 axes by checking the angle formed by the *resultant* acceleration against the unit Z vector. We can take the angle between 2 vectors using the following formula:

\theta = \cos^{-1}\left(\frac{a \cdot b}{\left|a\right| \left|b\right|}\right)

where *θ* is the angle and *a* and *b* are vectors 1 and 2, respectively.
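Putting the formula to work, a minimal level check might look like this (the function names are illustrative, and the clip guards against rounding pushing the cosine just outside [-1, 1]):

```python
import numpy as np

def tilt_angle(accel):
    """Angle (rad) between the measured acceleration vector and the unit Z vector."""
    z = np.array([0.0, 0.0, 1.0])
    a = np.asarray(accel, dtype=float)
    cos_theta = np.dot(a, z) / (np.linalg.norm(a) * np.linalg.norm(z))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def is_level(accel, tol_rad=np.radians(1.0)):
    """True when the device is within the 1 degree tolerance of level."""
    return tilt_angle(accel) < tol_rad
```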

Better? Well, not quite. All signals in engineering applications are contaminated by some level of *noise*, which refers to undesired and random variations in data that don’t carry meaningful information. Accelerometers are notoriously noisy instruments, and we have a pretty small margin for error, so it’s likely that our current algorithm will give us a flickering level-to-non-level signal even though the chip might be perfectly level. Let’s give it a shot. I’ve done an experiment where I have slowly rotated the device around the “level” position, moving it in and out of the level state. If I run the previous code on my experiment I see the following:

and let’s take a closer look at the region that’s close to the level mark:

I want to manufacture a leveler using this algorithm and sell it to the public. The device will have an LED on it that flips between red and green when the device is non-level and level, respectively. I’m going to get a lot of 1-star, “wouldn’t recommend” reviews from frustrated customers who buy a device that never shows a solid green light, even when it’s sitting on a perfectly level surface!

In reality, Digital Signal Processing (DSP) is needed to achieve acceptable performance. In the plot below, we have filtered the raw angle signal using DSP, which should solve the flickering problem mentioned above. Of course, were this a real product, there are several other sources of error that should be addressed; if you’d like to know more, please reach out!

A finite impulse response (FIR) filter is an algorithm that filters a signal based on a finite history of the signal. The moving average filter implemented in this blog post is FIR, because the output of the filter depends only on the previous *b* signal points; any signal history beyond this is “forgotten” by the filter. An infinite impulse response (IIR) filter is an algorithm that filters a signal as a function of the entire history of the signal. An example of an IIR filter is the exponential filter, which is written as follows:

y(t) = (1 - \alpha) \cdot y(t-1) + \alpha \cdot x(t)

where *α* falls between 0 and 1. Unlike FIR filters, the output *y(t)* of the filter depends on *previous outputs* of the filter. The reason this is considered an “infinite response” is that the above equation can be rewritten as a weighted sum of all previous inputs:

y(t) = \alpha \sum_{n=0}^{\infty}{(1 - \alpha)^{n} \, x(t-n)}

In other words, the infinite response filter never forgets! Ironically, the finite response filter is typically more memory intensive, as the signal history for the entire length of the filter must be stored. The exponential filter only needs to remember its previous output (you would implement it using the recursive form above, not the infinite sum).
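A minimal sketch of the exponential filter in its recursive form (the choice of α and seeding the filter with the first sample are illustrative):

```python
def exponential_filter(x, alpha=0.1):
    """IIR exponential filter, recursive form: y(t) = (1-a)*y(t-1) + a*x(t)."""
    y = []
    prev = x[0]                  # seed the filter with the first sample
    for sample in x:
        prev = (1 - alpha) * prev + alpha * sample   # only `prev` must be stored
        y.append(prev)
    return y
```

Note how only one value (`prev`) carries the entire infinite history, versus an FIR filter that must buffer its whole window.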

Congratulations, you have completed the first steps of building a leveler. Speaking of steps, don’t forget to join us next week, where we will be building a crude pedometer to demonstrate some time-series techniques. See you then!

---

Before we get started with utilizing our hardware to perform vibration signal analysis, we need to understand some key definitions from the world of Digital Signal Processing (DSP). In this post, we’ll be examining the difference between the Continuous and Discrete Space, Nyquist Theorem, Real-Time vs. Batching Techniques, and Causal Vs. Non-Causal Systems. Be sure to start at the beginning if you have just found this blog series!

When working with digital measurement devices, we are limited to gathering information in “discrete time”, that is, we can only take a finite number of measurements within a time period. This is usually limited by the response time of the sensor taking measurements. Of course, the real world doesn’t have discrete time but is “continuous time”. In reality, between any two time points, there exist an infinite number of other time points. This is a bit of an abstract concept, but as long as you understand that we have a limited number of samples per time period and that the discrete space has implications on our analysis, you’ll be OK. One of these implications is that, as DSP engineers, we need to abide by the Nyquist Theorem.

The Nyquist Theorem states the following:

f_{m} = 2 f_{s}

Where f_{m} is the minimum sampling rate required to measure frequencies up to f_{s}. If the sampling frequency is less than f_{m}, you’ll experience a phenomenon called *aliasing*, where frequencies greater than f_{s} are interpreted as lower-frequency signals. Because the Nicla is capable of sampling at about 125 Hz, this limits the frequencies it can measure to < 62.5 Hz. While 2x is the theoretical minimum, in engineering applications we usually aim to sample at >10x the frequency being measured, depending on the application.
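A quick numerical illustration of aliasing (the 100 Hz tone is a made-up example): sampled at 125 Hz, a 100 Hz cosine lands on exactly the same sample values as a 25 Hz cosine, so the two are indistinguishable after sampling.

```python
import numpy as np

fs = 125                             # sampling rate (Hz), like the Nicla's
t = np.arange(0, 1, 1 / fs)          # 1 s of sample instants
high = np.cos(2 * np.pi * 100 * t)   # 100 Hz tone, above fs/2 = 62.5 Hz
alias = np.cos(2 * np.pi * 25 * t)   # 125 - 100 = 25 Hz alias
# `high` and `alias` agree sample-for-sample: the 100 Hz tone "folds" to 25 Hz
```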

In our analysis, we will be utilizing real-time and batching techniques. In real-time systems, data is processed immediately, with low-latency, usually to perform some sort of control action that uses feedback. For example, an airbag deploying upon a vehicle crash or a gimbaled thrust of a rocket adjusting a rocket’s angular rotation are examples of real-time systems that require rapid signal processing to perform a control action. In less time-sensitive systems, batching can be used, where the entire history of a signal can be analyzed. This usually has reduced computational requirements as time is less of an issue, and allows for the implementation of algorithms that are more effective on batched data. During this series we will be exploring examples of both real-time and batched systems!

A causal system is a system whose output depends only on current and past inputs. A non-causal system is a system whose output depends on *future* inputs. Even though there are no known non-causal systems in the real world, there are analysis techniques which work by going *forward* and *backwards* in time. For example, filters often introduce a delay; to mitigate this effect, a filter can be applied both forwards and backwards in time. This has the unintuitive effect of *future* data points influencing past outputs. Consequently, most real-time algorithms need to be more or less causal systems, otherwise they would have to store significant blocks of data to process in batches. Regardless, non-causal algorithms have an important role in signal analysis, especially in filtering and batch processing.

Now that we have the definitions out of the way, let’s explore some real world applications for these systems. In the next blog, we will be making a crude leveler to help you make sure things are flat. Stay tuned!

---

In this post of the Vibrations All Around Us blog series, we will be discussing the setup of the Nicla Sense ME microcontroller that will be used for measuring vibrations in all of the future labs in this series. For those of you completing the labs with us, this will be an important lab for you as this is the tool we will use for all of the following experiments. If you are just finding this page, be sure to start from the beginning introduction to the series.

To test out the board’s sensors, you can follow the instructions in this example from the Arduino website to install the appropriate libraries and get a basic understanding of the sensors available on the board. We’ll only be utilizing the board’s accelerometer for this blog series, so I’ve adapted the sketch to publish only accelerometer information, to increase the rate that data can be transmitted over Bluetooth Low Energy (BLE).

We used a Python script that has been adapted from Bleak (an SDK for Python BLE connections) to read the accelerometer information published by the board over the BLE connection. We will be using this script extensively over the course of this series to log data. Within the script, the Service and Characteristic Bluetooth UUIDs have been set to match the UUIDs in the Arduino sketch, so if you change one, make sure you change the other. Finally, you can choose the time of data capture and the location/name of the data file.

Let’s get some data now! When you run the script, you should see the terminal print the accelerometer data packets as they’re received. After the number of seconds specified by *runtime*, the script will automatically terminate and print a message indicating the publishing speed of the data:

```
X accel: 1167.0, Y accel: 37.0, Z accel: 3948.0, time: 4540400.0
X accel: 1175.0, Y accel: 34.0, Z accel: 3956.0, time: 4540408.0
X accel: 1165.0, Y accel: 48.0, Z accel: 3945.0, time: 4540416.0
X accel: 1169.0, Y accel: 38.0, Z accel: 3958.0, time: 4540424.0
X accel: 1179.0, Y accel: 53.0, Z accel: 3958.0, time: 4540432.0
X accel: 1171.0, Y accel: 37.0, Z accel: 3959.0, time: 4540440.0
Accel data gathered for 10 s at frequency of 112.3 Hz
```

For the first couple of seconds the Bluetooth connection is slow, but we can expect a consistent 125 Hz sampling rate after the first 2 seconds.

A couple of things to note:

- The data coming from the Nicla’s accelerometer is scaled by 4096 counts per unit of gravity. Therefore, to get real world units of gravity we can divide the reported values by 4096.
- We are deleting the first 2 seconds of data. The BLE connection is usually poor for the first couple of seconds while establishing a connection. Make sure to account for this when you are setting your experiment length.

To verify that we are gathering meaningful data, I have taken a 10 second data capture where I am holding the Nicla and shaking it along its X, Y, and Z components in that order (these directions are indicated on the physical board). Let’s add another helper function to utils.py which allows me to plot the individual components of the accelerometer.

You should see the distinct components shaking like so. If your plot doesn’t look like this, just double check that you’ve shaken the device along the correct components. Let’s take another test where I’ve let the board sit completely still and change its orientation every few seconds. If we take the resultant acceleration measured, which is the magnitude of the resultant vector from adding the 3 acceleration components, we should see the value be close to 1 G of acceleration while the board is sitting still since the only force acting on the object is the force of gravity. The resultant is calculated as follows:

resultant = \sqrt{x^{2} + y^{2} + z^{2}}
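As a small sketch, converting raw counts to G and computing the resultant looks like this (the function name is illustrative; the 4096 counts-per-G scale is the one noted above):

```python
import numpy as np

def resultant_g(x, y, z, counts_per_g=4096.0):
    """Vector magnitude of the three axes, converted from raw counts to G."""
    return np.sqrt(x**2 + y**2 + z**2) / counts_per_g
```

Applied to one of the raw packets printed earlier (X 1167, Y 37, Z 3948), this comes out close to 1 G, as expected for a board at rest.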

We can see that in between the periods of movement, the accelerometer rests at a steady 1 G (the large spikes are me adjusting the board to different orientations). This value will likely be slightly off due to variations in temperature, which the accelerometer is sensitive to. There are techniques to compensate for the effect of temperature on the accelerometer, but we don’t need that level of precision for this blog series.

In the next post, we’ll be looking at Aliasing, Real-Time/Batching Techniques, and Causal/Non-Causal Techniques.
