Remote gait monitoring system to facilitate assessment of people with multiple sclerosis

We are proud to introduce our new paper: “Remote Gait Monitoring System to Facilitate Assessment of People with Multiple Sclerosis”. Read on its own terms, the paper contributes a way to turn everyday human movement into a dependable clinical signal by engineering the whole stack: from sensors on the foot, to semantics in the cloud, to classifiers that speak the language of disability. It starts with an Internet-of-Wearable-Things setup built around smart socks that combine inertial and plantar-pressure sensing, capturing long, naturalistic traces instead of clinic-only snapshots. That choice is not cosmetic: it raises the ecological validity of what “gait” means in practice and lets the system see fatigue, asymmetry, and irregularity as they actually unfold, both at home and in the street.

What lifts the work beyond another device study is the way raw motion becomes structured evidence. The pipeline first tames the physics, synchronizing streams, correcting orientation with established sensor-fusion filters, and fusing IMU with GPS to control drift, so that position and cadence estimates are trustworthy over hours, not just minutes. Only then does it cut the signal into meaning: fixed windows slide across the day, short-time Fourier transforms expose where energy concentrates, and a deliberately narrow 0–5 Hz band acts as a lens on the quasi-periodic mechanics of walking. In this time–frequency view, “walking” stops being a counter of steps and becomes a spectral fingerprint whose base frequency and harmonic sharpness reflect speed, symmetry, and regularity.
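To make the time–frequency step concrete, here is a minimal sketch of that idea in Python: slide fixed windows over an acceleration-magnitude trace, take a short-time Fourier transform, keep only the 0–5 Hz band, and summarize each window by its dominant frequency and how sharply power concentrates at that peak. The window length, overlap, and sharpness measure below are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np
from scipy.signal import stft

def gait_spectral_fingerprint(accel_mag, fs=100.0, win_s=5.0, band=(0.0, 5.0)):
    """Band-limited STFT summary of a gait trace (illustrative sketch).

    Returns, per analysis window: the window's center time, the dominant
    (base) frequency inside `band`, and a crude peak-concentration score
    standing in for "harmonic sharpness".
    """
    nper = int(win_s * fs)
    f, t, Z = stft(accel_mag, fs=fs, nperseg=nper, noverlap=nper // 2)
    mask = (f >= band[0]) & (f <= band[1])      # keep only the 0-5 Hz lens
    P = np.abs(Z[mask]) ** 2                    # band-limited power, per window
    fb = f[mask]
    base_freq = fb[P.argmax(axis=0)]            # dominant step/stride frequency
    sharpness = P.max(axis=0) / (P.sum(axis=0) + 1e-12)  # peak vs. total power
    return t, base_freq, sharpness
```

On a synthetic 1.8–2 Hz quasi-periodic signal, the returned base frequency tracks the cadence, while slowing, asymmetric, or irregular walking shows up as a drifting base frequency and a lower sharpness score.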

On top of that scaffold the authors layer two complementary recognizers. One is explicitly interpretable: a PSD/spectrogram-based detector that flags walking and grades its quality from the stability and shift of spectral peaks, which then correlates with disability strata. The other is a CNN-LSTM with attention that learns spatiotemporal patterns over multi-sensor channels and uses its attention map to show which parts of the cycle drove the decision, an important gesture toward clinician trust when models move from lab to clinic. In a preliminary cohort monitored in free-living conditions, the deep model separates gait from non-gait with about 97% accuracy and low variance, while the spectral score achieves an AUC near 0.99 for identifying severe disability; these are early but striking signals that gait, treated spectrally and semantically, can carry diagnostic weight.
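The interpretable branch can be sketched very compactly. The toy detector below, in the spirit of the paper's PSD-based recognizer (but with assumed band edges and an assumed threshold, not the published values), declares walking when a single locomotor-band peak concentrates a large share of the 0–5 Hz power; diffuse, peakless spectra are flagged as non-gait.

```python
import numpy as np
from scipy.signal import welch

def is_walking(accel_mag, fs=100.0, locomotor_band=(0.5, 3.0), peak_ratio=0.2):
    """Toy PSD-based walking flag (illustrative; thresholds are assumptions).

    Walking is declared when the strongest peak inside the locomotor band
    holds more than `peak_ratio` of the total 0-5 Hz power.
    """
    f, P = welch(accel_mag, fs=fs, nperseg=int(4 * fs))
    band = (f > 0) & (f <= 5.0)
    f5, P5 = f[band], P[band]
    in_loco = (f5 >= locomotor_band[0]) & (f5 <= locomotor_band[1])
    peak_power = P5[in_loco].max()
    return bool(peak_power / (P5.sum() + 1e-12) > peak_ratio)
```

The same peak statistics (height, stability, drift over time) are what a grading score can correlate with disability strata; the CNN-LSTM branch replaces this hand-built statistic with learned spatiotemporal features, at the cost of needing attention maps to recover a comparable explanation.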

Our new IEEE Xplore paper, while demonstrated in a healthcare setting, lays out a reusable blueprint for capturing and integrating human-centered signals into digital asset models. Within DIGEST WP3, we are adapting that blueprint to industrial settings, so process operators and machines are treated as a coupled system, allowing us to describe and exploit variability instead of fighting it. This strengthens context-aware prediction and prescriptive maintenance across assets and shifts decision-making closer to real time.

The paper is framed in health, but its real center of gravity is methodological: it shows how to turn messy, human-centered signals into dependable, decision-ready intelligence. It treats a person not just as a data source but as a dynamic part of the system whose physiology, context, and routine modulate outcomes. That stance makes the work portable to industry: if you substitute “operator” for “patient” and “work procedure” for “treatment protocol”, you get a carefully engineered path for bringing human variability into a model of assets and processes, so decisions can be timed to when they matter and tailored to the state the system is actually in, not the state we imagine it to be.

Read that way, it slots naturally into DIGEST’s WP3, which is precisely about bringing the “human in the loop” into our digital asset fabric. In production systems, the operator isn’t an afterthought; they are a context source whose physiology, attention, routines, and micro-behaviors modulate how an asset performs under nominally identical conditions. The paper’s pipeline (sensorized data collection, quality control, feature extraction, and model conditioning) translates to WP3’s targets for context enrichment via wearables and mobile capture, as well as secure, interoperable integration into upstream services. That is exactly what WP3 formalizes through its deliverables on wearables/digital solutions for predictive-model context and on integration/interoperability and contactless communication for stakeholders with uneven digital maturity, and it’s the substrate we need before process/asset models can be enriched with behavioral facets and used to explain (and anticipate) variability on the shop floor. In other words, you extend the notion of “asset” from a static equipment twin to a coupled human–machine system, and variability stops being noise and becomes modelled information that improves APM strategies and prescriptive maintenance.

See the WP3 overview and charter language for the behavioral focus and the D3.1/D3.2 scope, plus the internal workplan and interim reports where APM modelling and data models of assets are being developed in tandem with WP3.