Spectral characteristics
The group was interested in whether a spectrogram might reveal characteristics of the combined signal in the frequency domain that differed in some way from the pressure signal. Figure 3a displays a spectrogram of the light intensity I(t). Figure 3b shows a spectrogram of the blood pressure signal, P(t).
It is difficult to conclude anything from these spectrograms. The periods in both do not appear to vary much, although in the time series plot the blood pressure period does appear to vary from beat to beat. There is a more apparent jump in the colours of the blood pressure spectrogram at times corresponding to the rapid rise in pressure associated with systole, compared with the gentler sine-dominated oscillations of I(t).
Spectral techniques are designed for linear systems and rely on separating components by frequency. We therefore next attempted a phase space reconstruction [10], to see whether we could exploit differences in the temporal structure of the smooth gait signal and the sharper blood pressure signal. Such techniques are motivated by the theory of dynamical systems, in which the time evolution is defined in an appropriate phase space.
Phase space reconstruction
Our time series may be regarded as a sequence of measurements obtained from a dynamical system. The established approach is to embed the time series as a trajectory in a finite-dimensional space. It has been demonstrated under quite general circumstances that the reconstructed trajectory is topologically equivalent to the trajectory in the unknown space in which the original system lives. The particular method used here is the method of delays, based on the idea of a delay register [10, 11].
To do this, we choose an embedding dimension and a time lag or delay. Embedding theorems guarantee faithful reproduction of the trajectory if the embedding dimension is larger than twice the number of active degrees of freedom, regardless of how large the dimensionality of the true space is. The delay is not the subject of the embedding theorems since they consider data with infinite precision. In practice, the time delay must be found by experimentation: no rigorous way of establishing its optimal value has been determined [8, 12].
If the delay is small compared to the intrinsic time scales of the system, successive elements of the delay vectors are strongly correlated: if it is too small, there is almost no difference between the different elements of the delay vectors, whereas if it is large enough, the different coordinates may be almost uncorrelated, or independent, providing a topologically correct view of dynamical behaviour. Data values are fed into the register and propagate sequentially until they are lost at the other end n clock cycles later. At any instant, the register thus contains n consecutive data values \((v_{i}, v_{i-1}, \ldots, v_{i-n})\).
Each sequence of data points can be thought of as an n-dimensional vector, usually written as a column vector. The sequence of n-vectors generated by clocking the data through the delay register can be thought of as a discrete trajectory in an n-dimensional Euclidean space. If the signal arises from a finite-dimensional deterministic system, then provided the embedding dimension is sufficiently large, the trajectory gives a genuine image of the dynamics in its own phase space.
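The delay-register construction described above can be sketched as follows. This is our own illustrative implementation in Python with NumPy (the original work does not specify an implementation); the function name `delay_embed` and the toy sine signal are assumptions for demonstration only.

```python
import numpy as np

def delay_embed(x, n, lag):
    """Embed a 1-D time series x into n-dimensional delay vectors.

    Each row of the returned array is (x[i], x[i-lag], ..., x[i-(n-1)*lag]),
    i.e. the contents of an n-stage delay register at step i.
    """
    x = np.asarray(x)
    m = len(x) - (n - 1) * lag          # number of complete delay vectors
    return np.column_stack([x[(n - 1 - j) * lag : (n - 1 - j) * lag + m]
                            for j in range(n)])

# Toy example: a pure sine embedded in two dimensions traces an ellipse,
# the shape we associate below with the gait component.
t = np.linspace(0, 10 * np.pi, 2000)
traj = delay_embed(np.sin(t), n=2, lag=100)
```

The sequence of rows of `traj` is the discrete trajectory in the n-dimensional Euclidean space described in the text.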
We then reconstructed a two-dimensional phase space view of our data in this way, using the ersatz values of I(t) created via the Beer-Lambert law. The results appear in Fig. 4, with a lag of about one quarter of the dominant period in the signal, which is also close to the first zero of the autocorrelation of I(t). The large D-shaped structure is reminiscent of the ellipse we would obtain if we used only the gait signal to create the two-dimensional trajectory, so we associate the D-shape with gait. The small loops seen at various places on the straight-line part of the D we associate with heartbeats.
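The lag-selection rule mentioned above (first zero of the autocorrelation) is easy to compute. A minimal sketch, assuming NumPy and a synthetic sine in place of the real I(t); for a pure sine of period T samples, the first zero crossing lies near T/4:

```python
import numpy as np

def first_zero_of_autocorr(x):
    """Return the first lag at which the autocorrelation of x crosses zero."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0, 1, 2, ...
    acf = acf / acf[0]                                   # normalise so acf[0] = 1
    crossings = np.where(np.diff(np.sign(acf)) != 0)[0]  # sign changes
    return int(crossings[0] + 1) if len(crossings) else None

# Synthetic stand-in for I(t): a sine of period T = 200 samples,
# so the first zero of the autocorrelation should fall near T/4 = 50.
T = 200
x = np.sin(2 * np.pi * np.arange(5 * T) / T)
lag = first_zero_of_autocorr(x)
```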
The appearance of this phase space reconstruction led us to consider ways to remove the larger D-shaped behaviour from the signal, leaving behind mostly the effects of the blood pressure. Although we are motivated by a two-dimensional projection of the trajectory, we now consider a higher-dimensional phase space, acknowledging that the dimension needs to be high enough to obtain a topologically equivalent trajectory. The results of the remarkably effective SVD method are detailed in the following sections.
Singular value decomposition
The set of all delay vectors forms an ellipsoidal cloud in the Euclidean space, and we wish to establish and then remove the two most important directions associated with the data, as these will correspond to the D-shape seen in Fig. 4. The procedure continues by subtracting the mean value from each column vector, creating a new set \(x_{i}\), and then diagonalising the covariance matrix
$$\frac{1}{N}\sum_{i=1}^{N}x_{i}{x_{i}^{T}}. $$
This is related to the singular value decomposition of a trajectory matrix M whose rows are the row vectors \({x_{i}^{T}}\). The covariance matrix is, up to the factor 1/N, the product \(M^{T}M\), and the right singular vectors of M are the eigenvectors of the covariance matrix. This matrix is real and symmetric, hence its eigenvalues are real and its eigenvectors are orthogonal. Mathematically, the singular values of M are the square roots of the eigenvalues of \(M^{T}M\).
Finding the singular vectors corresponds to finding the principal semi-axes of the data cloud generated in the embedded phase space by the trajectory, and the singular values are then related to the lengths of these axes. Thus, the singular vectors give a geometric description of where the trajectory lies, and the singular values are a measure of the extent of the trajectory in the corresponding directions. The most relevant directions in space are given by the eigenvectors corresponding to the largest eigenvalues; directions associated with small eigenvalues may be neglected. The group thought that the SVD might then identify the two most important axes as those associated with the D-shape seen in two dimensions and allow us to remove those components from the data.
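The relationship between the covariance eigenvalues and the singular values can be checked numerically. The sketch below (Python/NumPy, with a random matrix as our own stand-in for a trajectory matrix) uses \(M^{T}M\) of the mean-centred matrix, so its eigenvalues are exactly the squared singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(500, 6))    # stand-in trajectory matrix: 500 delay vectors in 6-D
X = M - M.mean(axis=0)           # centre the cloud of delay vectors on the origin

C = X.T @ X                      # the product M^T M (covariance up to a 1/N factor)
eigvals = np.sort(np.linalg.eigvalsh(C))[::-1]   # eigenvalues, descending
svals = np.linalg.svd(X, compute_uv=False)        # singular values, descending
```

Here `svals**2` matches `eigvals` to numerical precision, which is the property the dimension-removal procedure below relies on.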
The first step in using SVD is to create a matrix M, each row of which is the vector \((I_{k+1}, I_{k+2}, \ldots, I_{k+n})\), with the first row having k=0, the second k=n+1, the third k=2n+1, etc. We generated as many rows as the data allowed, say m rows. In the standard approach, each row is then averaged separately, and the average is subtracted from that row to create a normalised M matrix. This moves the origin in phase space to the middle of the ellipsoid. In fact, we found that we obtained better results if we did not normalise the matrix M.
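A minimal sketch of this construction (Python/NumPy; the function name and the synthetic sine standing in for the recorded I(t) are our own). For simplicity we take contiguous non-overlapping windows (k = 0, n, 2n, …), a slight simplification of the row indexing quoted above:

```python
import numpy as np

def trajectory_matrix(I, n):
    """Stack consecutive length-n windows of the time series I as the rows of M.

    Generates as many complete rows as the data allow; here the rows are
    contiguous non-overlapping windows.
    """
    m = len(I) // n                          # number of complete rows
    return np.asarray(I[: m * n]).reshape(m, n)

I_t = np.sin(np.linspace(0, 40 * np.pi, 6000))   # stand-in for the recorded I(t)
M = trajectory_matrix(I_t, n=50)                 # 120 rows of length 50
```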
The SVD decomposes the matrix M as
$$M_{m\times n}= U_{m\times m}\,S_{m\times n}\,V_{n\times n}^{T}, $$
where \(U^{T}U=I\) and \(V^{T}V=I\); the columns of U are orthonormal eigenvectors of \(MM^{T}\); the columns of V are orthonormal eigenvectors of \(M^{T}M\); and S is a diagonal matrix containing the singular values of M in descending order.
In order to remove the D-shape seen in two dimensions, we set the two largest singular values in S to zero. We then compute a new M matrix using the modified S matrix and the same matrices U and V as obtained from the original M matrix. This new M matrix then yields a time series that should capture the notches in I(t).
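A sketch of this filtering step, assuming NumPy; the toy two-component signal (a large slow "gait" oscillation plus a small fast "pulse" component) and the window length are our own illustrative choices:

```python
import numpy as np

def remove_top_singular_values(M, k=2):
    """Zero the k largest singular values of M and rebuild the matrix with
    the same U and V, leaving a residual with the dominant component removed."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = s.copy()
    s[:k] = 0.0                              # discard the k most energetic directions
    return U @ np.diag(s) @ Vt

# Toy composite: large slow oscillation (gait-like) + small fast one (pulse-like).
t = np.linspace(0, 20 * np.pi, 4000)
signal = 5.0 * np.sin(t) + 0.2 * np.sin(25 * t)
M = signal.reshape(-1, 40)                   # 100 rows of length 40
residual = remove_top_singular_values(M, k=2).ravel()
```

In this toy case the residual is dominated by the small fast component, since the large slow oscillation contributes the two largest singular values.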
Figure 5 shows the blood pressure data, the gait data, the composite signal I(t), the residual obtained by setting the largest singular value in S to zero, and the result of also removing the second largest singular value from I(t). Note that the remaining time series has a pulse of activity located exactly at each place where the blood pressure rises to systole. Figure 6 shows the result of repeating this analysis when the gait has a period about 50% longer than before. SVD seems to work just as well in locating the places where the blood pressure rises, that is, in finding the heart rate.
Figure 7 shows more detail of just the blood pressure curve and the result of removing the two largest singular values, otherwise the same case as in Fig. 5. Figure 8 shows a close-up view of the blood pressure signal over one period, superposed with the combined I(t) signal and the result of removing the two largest singular values from that signal. Note the increased amplitude of the oscillations in this ‘after SVD’ data at the place where the blood pressure rises suddenly to systole.
In fact, we discovered that it is possible to recover curves that look just like the original blood pressure data by first rectifying and then smoothing the residual signal obtained after removing the two largest singular values. As illustrated in Fig. 9, the resulting signal looks very much like the original blood pressure data before modification by the Beer-Lambert law. That is, buried in the apparently noisy SVD residual is the original blood pressure data. Removing the two largest singular values from the composite signal I(t), then rectifying and smoothing, gives a filtered signal that closely resembles the desired blood pressure curve and yields correct heart rate values. More extensive testing is required to establish the general efficacy of this approach, but it looks very promising and is visually compelling. We also found that the result is not sensitive to the length n of the vectors chosen to form the matrix M from the time series.
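The rectify-then-smooth step can be sketched as follows (Python/NumPy; the moving-average window and the synthetic burst residual are our own illustrative choices, since the smoothing filter is not specified in the text):

```python
import numpy as np

def rectify_and_smooth(residual, window):
    """Full-wave rectify the SVD residual, then smooth with a moving average."""
    kernel = np.ones(window) / window
    return np.convolve(np.abs(residual), kernel, mode="same")

# A burst-like residual: a quiet noise baseline with one short high-frequency
# burst, mimicking the pulse of activity seen near each systolic rise.
rng = np.random.default_rng(1)
t = np.arange(2000)
residual = 0.02 * rng.normal(size=2000)
residual[900:1000] += np.sin(0.8 * t[900:1000])
envelope = rectify_and_smooth(residual, window=50)
```

The envelope rises exactly where the burst occurs, which is why the rectified and smoothed residual reproduces the shape of the blood pressure waveform.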