Closeup of a boxer's hand punching a bag, with illustrated wireless waves


How To Train Your Wearable Device

WRITTEN BY KEVIN LOCKWOOD, FORMER CHIEF TECHNOLOGY OFFICER AT MISTYWEST

Leveraging ML Components to Unlock Powerful Low Power Applications

Motion sensors, or Inertial Measurement Units (IMUs), in wearable connected devices have long been able to internally detect limited human interactions (such as tapping) and, in some cases, very specific orientations (such as a wrist tilt) while consuming very little power. Detecting complex motions, like the orientation or travel of a wearable in sports applications (such as tennis, golf, or boxing), requires a lot of data analysis by a microcontroller. Not only will this microcontroller consume a lot of power, but it will also demand significant developer effort to write the software or firmware algorithms associated with these motions.

By leveraging ST’s Machine Learning Core Motion Sensor, we can sense complex motions with less development effort, ultimately leading to a working IoT product in a shorter time frame. This article will give an overview of ST’s new offering and provide a guide to setting up the device, collecting data, training a model on that data, and loading the result back onto the IoT device to enable gesture recognition.

 

Meet the ST Machine Learning Core Motion Sensor

In 2019, STMicroelectronics introduced a Machine Learning Core Motion Sensor (IMU)–the LSM6DSOX–one of the first sensor chips to integrate a Machine Learning core to offload the first stage of activity tracking from a microcontroller (MCU). This saves energy and accelerates motion-based applications such as fitness logging, wellness monitoring, personal navigation, and fall detection.

The Machine Learning Core of the LSM6DSOX is a programmable (or trainable) decision tree engine which allows the IoT device to classify and detect specific motion patterns based on training data, generating results in dedicated output registers. This particular chip can be configured to run up to 8 decision trees simultaneously and independently.

A decision tree is a mathematical tool composed of a series of configurable nodes, each characterized by a binary “if-then-else” condition in which an input feature computed from the sensor data is evaluated against a threshold. Using a decision tree drastically reduces the code complexity, data analysis, and power otherwise required from a microprocessor in motion-based intelligent connected wearable devices.
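
To make that concrete, here is a minimal sketch in Python of the kind of binary evaluation each node performs. The feature names and thresholds are hypothetical, purely for illustration – the actual trees run in silicon on the sensor, not in software:

```python
# Minimal illustration of a decision-tree node chain, as the MLC evaluates it.
# Feature names and thresholds are hypothetical, chosen for readability.

def classify(features: dict) -> str:
    """Walk a tiny hand-written tree over windowed IMU features."""
    if features["acc_peak_to_peak"] <= 0.8:    # low overall motion
        return "idle"
    elif features["gyro_mean"] <= 1.5:         # little rotation -> straight punch
        return "straight"
    elif features["acc_variance"] <= 2.0:      # moderate, rotational -> jab
        return "jab"
    else:
        return "uppercut"

print(classify({"acc_peak_to_peak": 1.2, "gyro_mean": 2.1, "acc_variance": 3.4}))
```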

 

 

The Machine Learning processing capability of the LSM6DSOX keeps power consumption consistently low by moving some algorithms from the application processor to the sensor – specifically, algorithms that follow an inductive approach to searching for patterns. Standard examples include fitness activity recognition, motion intensity detection, and carrying position recognition.

The LSM6DSOX’s filters and features can be applied to the raw sensor inputs. The filtered/featured output is then fed to the Machine Learning Core decision trees stored on the device (generated from training data you acquire through usage of the wearable).

 

 

Training a Wearable with ST’s Machine Learning Core

As with all machine learning applications, this requires datasets – in other words, training the sensor. Let’s put on “Eye of the Tiger”, dust off our gloves, and jump into the boxing ring to whip this sensor into shape.

Different types of punches

In this use case, we are capturing and logging repetitions of boxing-specific movements for the sensor to recognize, such as a straight punch, a jab, or an uppercut.

 

Overview

The high level workflow for configuring the machine learning core is as follows:

  1. Log gesture data via Bluetooth or Unico GUI
  2. Label the gesture data
  3. Configure the machine learning core with the labeled gestures
  4. Generate the decision tree and load the configuration onto the LSM6DSOX
  5. See the live decision tree outputs (based on recognized gestures) in Unico GUI

In this example, data can be collected either over Bluetooth or through the Unico GUI. Once the gestures have been labeled and recorded, we will use Weka to load the data patterns and create the machine learning model and decision tree(s). Finally, the model will be loaded back onto the device through the Unico GUI for testing and verification.

 

A few common software tools available to evaluate data logs and generate the motion decision trees:

  • Unico GUI – ST’s officially supported software for data log analysis, decision tree generation, and sensor configuration files. It is also capable of supporting all the MEMS sensors and sensor demonstration boards available in the STMicroelectronics portfolio.
  • Weka – Free software developed at the University of Waikato, New Zealand. It contains a collection of visualization tools and algorithms for data analysis and predictive modeling, together with graphical user interfaces for easy access to these functions. Weka is one of the most popular machine learning tools for decision tree generation.
  • RapidMiner – A data science software platform for business and commercial applications, providing an integrated environment for data preparation, machine learning, deep learning, text mining, and predictive analytics. It supports all steps of the machine learning process, from data preparation to optimization.
  • MATLAB – Decision trees for the Machine Learning Core can also be generated with MATLAB.
  • Python – Decision trees for the Machine Learning Core can be generated with Python through the scikit-learn package. Python scripts are available both as a Jupyter notebook (*.ipynb) and as a plain Python script (*.py); a minimal sketch follows this list.
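
As a rough illustration of the Python route, the sketch below trains a decision tree on labeled window features with scikit-learn and dumps it as text. The feature values here are made up, since in practice the inputs come from the ARFF file generated by Unico:

```python
# Minimal sketch: train a decision tree on labeled IMU window features
# with scikit-learn and dump it as text. Column names are hypothetical;
# in practice the features come from the ARFF file generated by Unico.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row = one windowed capture; columns = computed features + label.
data = pd.DataFrame({
    "acc_mean":       [0.1, 0.9, 1.2, 0.2],
    "acc_variance":   [0.05, 1.4, 2.3, 0.07],
    "gyro_peak2peak": [0.2, 3.1, 4.0, 0.3],
    "label":          ["idle", "jab", "uppercut", "idle"],
})

X, y = data.drop(columns="label"), data["label"]
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# The text dump mirrors the if-then-else structure loaded into the MLC.
print(export_text(tree, feature_names=list(X.columns)))
```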

 

Sensor Configuration

The sensor configuration for this boxing use case runs at 104 Hz. The accelerometer and gyroscope inputs are used with six different features (mean, variance, peak-to-peak, min, max, zero-crossing) computed over a window of 208 samples – a two-second window at 104 Hz. Left- and right-hand movements are easily distinguished by identifying the direction of rotation. The current consumption of the LSM6DSOX in this configuration is around 563 μA at 1.8 V.
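
For intuition, here is a minimal sketch (assuming NumPy, and a simple sign-change definition of zero-crossing) of how those six features could be computed over one 208-sample window; on the LSM6DSOX they are computed in hardware:

```python
import numpy as np

def window_features(x: np.ndarray) -> dict:
    """Six MLC-style features over one window of a single IMU axis."""
    return {
        "mean": float(x.mean()),
        "variance": float(x.var()),
        "peak_to_peak": float(x.max() - x.min()),
        "min": float(x.min()),
        "max": float(x.max()),
        # Count sign changes; the MLC's exact definition may differ.
        "zero_crossings": int(np.sum(np.signbit(x[1:]) != np.signbit(x[:-1]))),
    }

# 208 samples at 104 Hz = a two-second window.
print(window_features(np.random.randn(208)))
```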

 

 

With the Machine Learning Core turned off, the current consumption of the LSM6DSOX (accelerometer and gyroscope still running at 104 Hz) would be around 550 μA, so the Machine Learning Core adds just 13 μA for this algorithm.

Low-power microcontrollers consume between 2.5 and 4 mA while processing. Implementing the same feature extraction and classification in firmware on a standard microcontroller would cost roughly 4-5x more current overall. Thus, the LSM6DSOX is a great way to gain a stepwise addition in functionality for relatively little current.
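
A back-of-the-envelope comparison makes the point; note that the MCU duty cycle below is an illustrative assumption, not a measured value:

```python
# Back-of-envelope comparison; the duty cycle is an illustrative assumption.
mlc_added_uA = 563 - 550          # extra current for the MLC: 13 uA
mcu_active_uA = 3000              # ~3 mA while an MCU crunches IMU data
duty_cycle = 0.02                 # assume the MCU wakes 2% of the time for this task
mcu_avg_uA = mcu_active_uA * duty_cycle

print(f"MLC: {mlc_added_uA} uA, MCU (averaged): {mcu_avg_uA:.0f} uA, "
      f"ratio: {mcu_avg_uA / mlc_added_uA:.1f}x")
# -> MLC: 13 uA, MCU (averaged): 60 uA, ratio: 4.6x
```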

 

Recording the Data Patterns

Since low power consumption is one of the huge benefits of sensor machine learning technology, it is only natural to pair it with a BLE SoC – one of the most ubiquitous and lowest-power forms of wireless communication. If BLE is not available, the Unico GUI can also be used to log the LSM6DSOX data.

In a fitness wearable application such as this one, configuration can be easily passed over BLE, and once configured, the sensor will recognize motion patterns – like straight punches, jabs, and uppercuts – from the decision trees.

  1. Connect the wearable to the Bluetooth LE central desktop application.
  2. Set up IMU data streaming and the data rate (up to 833 Hz). Live IMU data will stream and be logged on the desktop.
  3. When ready, have the user perform the movement you wish to capture/train the device for.
  4. Repeat the motion capture. The more data logs per motion, the better, since the model is trained on repetitions of each specific motion (a minimal logging sketch follows this list).
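
As a rough sketch of the desktop side, the snippet below uses the Python bleak library to subscribe to IMU notifications and append them to a CSV log. The device address, characteristic UUID, and packet layout are hypothetical placeholders – your wearable’s firmware defines the real ones:

```python
# Minimal BLE logging sketch using bleak. The address, characteristic UUID,
# and packet format below are hypothetical; substitute your device's values.
import asyncio
import struct
from bleak import BleakClient

DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"                    # placeholder
IMU_CHAR_UUID = "00001234-0000-1000-8000-00805f9b34fb"  # placeholder

def handle_imu(_, data: bytearray):
    # Assume each notification packs six little-endian int16s: ax..gz.
    ax, ay, az, gx, gy, gz = struct.unpack("<6h", data[:12])
    with open("imu_log.csv", "a") as f:
        f.write(f"{ax},{ay},{az},{gx},{gy},{gz}\n")

async def main():
    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.start_notify(IMU_CHAR_UUID, handle_imu)
        await asyncio.sleep(30)          # log for 30 seconds
        await client.stop_notify(IMU_CHAR_UUID)

asyncio.run(main())
```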

Once the data pattern has been recorded, we can load it into Unico to label the data (shown below), which will be used to configure the machine learning core.

Labeling the recorded data pattern (Source: STMicroelectronics)

Configuring the Machine Learning Core

Once we have the gesture data patterns recorded and labeled, we can configure the Machine Learning Core and generate the decision tree. The steps are as follows:

  1. Open Machine Learning Core Tool in Unico
  2. Load and label the data patterns
  3. Set configurations (e.g. inputs, ODRs, full scales, window length, filters)
  4. Generate the ARFF file (a minimal generation sketch appears after the figures below)
  5. Launch Weka (machine learning tool)
    1. Load the ARFF file
    2. Select attributes
    3. Generate a decision tree
    4. Save the decision tree in a text file
  6. Load the decision tree in the Machine Learning Core Tool of Unico
  7. Configure decision tree output values and meta classifiers
  8. Save the register configuration for LSM6DSOX (.ucf file)
Configuring the Machine Learning Core. (Source: STMicroelectronics)
Reading the decision tree output from Weka. (Source: STMicroelectronics)
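
For intuition, this is roughly what writing a Weka-compatible ARFF file from labeled feature windows looks like; the attribute names and values are illustrative, since Unico generates the real file for you (step 4 above):

```python
# Minimal sketch: write labeled feature windows to a Weka ARFF file.
# Attribute names and values are illustrative; Unico generates the real ARFF.
rows = [
    (0.12, 0.05, 0.4, "idle"),
    (0.95, 1.40, 3.1, "jab"),
    (1.20, 2.30, 4.0, "uppercut"),
]

with open("gestures.arff", "w") as f:
    f.write("@RELATION boxing_gestures\n\n")
    f.write("@ATTRIBUTE acc_mean NUMERIC\n")
    f.write("@ATTRIBUTE acc_variance NUMERIC\n")
    f.write("@ATTRIBUTE gyro_peak2peak NUMERIC\n")
    f.write("@ATTRIBUTE class {idle,jab,uppercut}\n\n")
    f.write("@DATA\n")
    for acc_mean, acc_var, g_p2p, label in rows:
        f.write(f"{acc_mean},{acc_var},{g_p2p},{label}\n")
```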

Recognizing Live Data

Once we have the decision tree loaded on the LSM6DSOX, we are ready to recognize live data! In the Unico GUI, the data tab will show the live streaming data as well as the decision tree results. When a gesture is executed, the decision tree output should change to reflect that a gesture has been recognized.

Data tab showing the decision tree results on a live data stream. (Source: STMicroelectronics)
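
Outside the Unico GUI, a host processor can read the same result by polling the MLC source registers. Below is a minimal sketch assuming a Linux host and the smbus2 Python package; the register addresses follow our reading of the LSM6DSOX documentation, so verify them against the datasheet before use:

```python
# Minimal sketch: poll the first MLC decision-tree output over I2C.
# Register addresses follow our reading of the LSM6DSOX datasheet (verify!).
from smbus2 import SMBus

LSM6DSOX_ADDR   = 0x6A   # 7-bit I2C address (0x6B if SDO is pulled high)
FUNC_CFG_ACCESS = 0x01   # bit 7 enables the embedded-functions register page
MLC0_SRC        = 0x70   # decision tree 1 result (embedded-functions page)

with SMBus(1) as bus:
    # Switch to the embedded-functions register page.
    bus.write_byte_data(LSM6DSOX_ADDR, FUNC_CFG_ACCESS, 0x80)
    result = bus.read_byte_data(LSM6DSOX_ADDR, MLC0_SRC)
    # Switch back to the main register page.
    bus.write_byte_data(LSM6DSOX_ADDR, FUNC_CFG_ACCESS, 0x00)

# The value maps to the output labels configured in step 7 above.
print(f"Decision tree 1 output: {result}")
```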

Conclusion

The output of this application would allow a user to count their punches or recognize technique changes during a training session, which is valuable for anybody who isn’t just training their wearable, but training their body as well.

Kevin Lockwood practicing his punches

In this post, we demonstrated how a low-power wearable can be created and trained to individual users’ habits and capabilities by leveraging the LSM6DSOX. While this boxing use case applies to a wearable device like a wristband, it could also apply to a device built into boxing gloves or wraps, or another use-case-specific integration.
