
How to detect falls with Particle

Compact fall detection system using Particle Photon 2, ADXL362 accelerometer, and tactile push button.

Shebin Jacob Nekhil R · November 22, 2024

The problem

Falls among elderly individuals are a significant health concern, especially as global demographics change. By 2050, over 1.5 billion people worldwide will be aged 65 and older, with many living independently. While this independence is valuable, it also presents risks; studies indicate that approximately 28-35% of individuals over 65 experience a fall each year, and this figure rises to nearly 42% for those over 70. Falls often lead to serious injuries, such as hip fractures and traumatic brain injuries, contributing to a high number of emergency room visits and hospitalizations among seniors. Furthermore, the fear of falling again can severely impact their quality of life and mobility.

The first hour after a fall—the “golden hour”—is crucial for receiving timely care. Delays in assistance can increase the risks of severe injury, chronic disability, and even mortality.

The solution

To address this growing issue, we built a fall detection system utilizing sensor-based alerts to promptly notify caregivers and reduce emergency response times. Our project aims to create an affordable and highly accurate wearable fall detector by leveraging deep learning techniques alongside the Particle Photon 2 and the ADXL362 accelerometer. This innovative solution will help ensure that seniors receive immediate assistance when they need it most, ultimately improving their safety and quality of life.

Dataset

A quality dataset is essential for building an accurate fall detection system, especially for wearable devices targeting elderly users. Fall detection systems require data that includes a variety of real-world scenarios, such as different types of falls and daily non-fall activities, to ensure that the model can reliably distinguish between actual falls and ordinary movements. Gathering such a dataset, however, is highly challenging. It involves carefully orchestrated data collection sessions with volunteers simulating various falls (like forward, backward, and lateral falls) and common activities, which are recorded using specialized motion sensors. This process is not only labor-intensive but also requires high standards of safety and consistency, especially if the goal is to represent different types of falls with realistic intensity and context.

The SisFall dataset addresses this need comprehensively. It consists of 19 ADLs (Activities of Daily Living) and 15 types of falls, with data collected from 38 volunteers: 15 elderly (ages 60-75) and 23 young adults (ages 19-30). The fall classes included in the dataset are shown below.

Code    Activity                                    Trials  Duration
-----------------------------------------------------------------------------
F01     Fall forward while walking caused by a slip     5   15s
F02     Fall backward while walking caused by a slip    5   15s
F03     Lateral fall while walking caused by a slip     5   15s
F04     Fall forward while walking caused by a trip     5   15s
F05     Fall forward while jogging caused by a trip     5   15s
F06     Vertical fall while walking caused by fainting  5   15s
F07     Fall while walking, with use of hands in a
        table to dampen fall, caused by fainting        5   15s
F08     Fall forward when trying to get up              5   15s
F09     Lateral fall when trying to get up              5   15s
F10     Fall forward when trying to sit down            5   15s
F11     Fall backward when trying to sit down           5   15s
F12     Lateral fall when trying to sit down            5   15s
F13     Fall forward while sitting, caused by fainting
        or falling asleep                               5   15s
F14     Fall backward while sitting, caused by fainting
        or falling asleep                               5   15s
F15     Lateral fall while sitting, caused by fainting
        or falling asleep                               5   15s


All ADL classes included in the dataset are shown in the table below.

Code    Activity                                        Trials  Duration
-----------------------------------------------------------------------------
D01     Walking slowly                                       1  100s
D02     Walking quickly                                      1  100s
D03     Jogging slowly                                       1  100s
D04     Jogging quickly                                      1  100s
D05     Walking upstairs and downstairs slowly               5  25s
D06     Walking upstairs and downstairs quickly              5  25s
D07     Slowly sit in a half height chair, wait a moment,
        and up slowly                                        5  12s
D08     Quickly sit in a half height chair, wait a moment,
        and up quickly                                       5  12s
D09     Slowly sit in a low height chair, wait a moment,
        and up slowly                                        5  12s
D10     Quickly sit in a low height chair, wait a moment,
        and up quickly                                       5  12s
D11     Sitting a moment, trying to get up, and collapse
        into a chair                                         5  12s
D12     Sitting a moment, lying slowly, wait a moment,
        and sit again                                        5  12s
D13     Sitting a moment, lying quickly, wait a moment,
        and sit again                                        5  12s
D14     Being on one's back change to lateral position,
        wait a moment, and change to one's back              5  12s
D15     Standing, slowly bending at knees, and getting up    5  12s
D16     Standing, slowly bending without bending knees,
        and getting up                                       5  12s
D17     Standing, get into a car, remain seated and
        get out of the car                                   5  25s
D18     Stumble while walking                                5  12s
D19     Gently jump without falling
        (trying to reach a high object)                      5  12s


This dataset consists of data collected from 38 volunteers divided into two groups: elderly people and young adults. The elderly people group was formed by 15 participants (8 male and 7 female), and the young adults group was formed by 23 participants (11 male and 12 female).

The table below shows the age, height, and weight range of each group.

Group       Sex     Age     Height(m)   Weight(kg)
-----------------------------------------------------------------------------
Elderly     Female  62-75   1.50-1.69   50-72
            Male    60-71   1.63-1.71   56-102
-----------------------------------------------------------------------------
Adult       Female  19-30   1.49-1.69   42-63
            Male    19-30   1.65-1.83   58-81


Preparing the dataset

The dataset was captured with three sensors sampled at 200 Hz: an ADXL345 accelerometer (±16 g, 13-bit ADC), a Freescale MMA8451Q accelerometer (±8 g, 14-bit ADC), and an ITG3200 gyroscope (±2000 °/s, 16-bit ADC). Sample data from a fall is shown below:

-9, -257, -25,  84,  247,  27, -120, -987,  63;
-3, -263, -23,  99,  258,  35, -110, -1016, 68;
-1, -270, -22,  114, 272,  45, -94,  -1037, 69;
1,  -277, -24,  127, 286,  57, -81,  -1062, 69;
2,  -281, -25,  134, 303,  70, -71,  -1079, 63;
11, -290, -24,  135, 322,  83, -51,  -1097, 59;
12, -296, -29,  134, 342,  96, -43,  -1114, 56;
13, -296, -29,  125, 364, 113, -33,  -1131, 57;
17, -300, -23,  119, 384, 130, -15,  -1143, 59;
16, -302, -21,  117, 408, 152, -1,   -1159, 65;


For this project, we use the raw acceleration data captured by the ADXL345 accelerometer, focusing on the first three columns of each data entry, which correspond to the X, Y, and Z axes.

Now the accelerometer data (x, y, z) is converted to gravitational units using the following formula:

Acceleration [g] = ((2 * Range) / (2^Resolution)) * raw_acceleration


We’re using the ADXL362 accelerometer on the wearable itself; it provides 12-bit resolution and measurement ranges up to ±8 g.
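
To make the formula concrete, here is a small helper; the sensor configurations match those given above, and the example raw value is taken from the second column of the first sample row:

CONVERT_G_TO_MS2 = 9.80665

def raw_to_g(raw, g_range, resolution_bits):
    """Convert a raw ADC reading to gravitational units (g)."""
    return raw * (2 * g_range) / (2 ** resolution_bits)

# SisFall's ADXL345 recordings: ±16 g range, 13-bit ADC
print(raw_to_g(-257, 16, 13))                     # ≈ -1.004 g
print(raw_to_g(-257, 16, 13) * CONVERT_G_TO_MS2)  # ≈ -9.84 m/s²

# The wearable's ADXL362: ±8 g range, 12-bit ADC
# (conveniently, both configurations work out to 1/256 g per LSB)
print(raw_to_g(512, 8, 12))                       # = 2.0 g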

Uploading the dataset to Edge Impulse

To upload our accelerometer data to Edge Impulse for machine learning model training, we first need to prepare it in Edge Impulse’s data acquisition format. The dataset, which contains accelerometer readings, is categorized into two classes: ADL (Activities of Daily Living) and FALL. Each reading includes acceleration values in three axes, which we convert from raw sensor units to m/s² to standardize the measurements.

To ensure secure data transfer, we need the API and HMAC keys from our Edge Impulse project. These keys are accessible from the Dashboard > Keys tab in the Edge Impulse Studio, allowing us to generate signatures for data validation. Once the data is formatted and signed, it’s ready for upload to the Edge Impulse Studio, where it will be used to train our machine-learning model for fall detection.

Below is a Python script that facilitates this conversion and packaging, transforming the raw accelerometer data into the required JSON format compatible with Edge Impulse Studio’s data ingestion API.

import json
import hmac
import hashlib
import glob
import os
import time
import sys

# Define constants
HMAC_KEY = "<hmac_key>"  # Replace with your HMAC key for Edge Impulse
OUTPUT_FOLDER = "./data/"
CONVERT_G_TO_MS2 = 9.80665  # Conversion factor from G to m/s²
INTERVAL_MS = 20  # Set interval (50 Hz)

# HMAC signature placeholder
empty_signature = ''.join(['0'] * 64)

# Label mapping based on filename prefix
ADL_CODES = {f"D{i:02}" for i in range(1, 20)}
FALL_CODES = {f"F{i:02}" for i in range(1, 16)}

# Generate label from filename prefix
def get_label_from_filename(filename):
    prefix = filename.split('_')[0]
    if prefix in ADL_CODES:
        return "ADL"
    elif prefix in FALL_CODES:
        return "FALL"
    else:
        raise ValueError(f"Unknown prefix in filename: {filename}")

# Convert data to JSON format with HMAC signature
def create_json_data(filename, values):
    data = {
        "protected": {
            "ver": "v1",
            "alg": "HS256",
            "iat": int(time.time())
        },
        "signature": empty_signature,
        "payload": {
            "device_name": "aa:bb:cc:dd:ee:ff",
            "device_type": "generic",
            "interval_ms": INTERVAL_MS,
            "sensors": [
                {"name": "ax", "units": "m/s2"},
                {"name": "ay", "units": "m/s2"},
                {"name": "az", "units": "m/s2"}
            ],
            "values": values
        }
    }

    # Sign the message
    encoded = json.dumps(data)
    signature = hmac.new(HMAC_KEY.encode('utf-8'), msg=encoded.encode('utf-8'), digestmod=hashlib.sha256).hexdigest()
    data['signature'] = signature

    return data

# Main function to process files
def process_files(data_folder):
    files = glob.glob(os.path.join(data_folder, "*/*.txt"))

    for path in files:
        filename = os.path.basename(path)
        label = get_label_from_filename(filename)
        output_filename = os.path.join(OUTPUT_FOLDER, f"{label}.{os.path.splitext(filename)[0]}.json")

        values = []
        with open(path) as file:
            for i, line in enumerate(file):
                if line.strip() and i % 4 == 0:  # keep every 4th line: 200 Hz -> 50 Hz
                    columns = line.strip().split(',')
                    # Extract only the first three values (ADXL345 ax, ay, az)
                    ax, ay, az = (float(columns[j]) for j in range(3))
                    # Raw counts -> g (±16 g range, 13-bit ADC) -> m/s²
                    scale = ((2 * 16) / (2 ** 13)) * CONVERT_G_TO_MS2
                    ax, ay, az = ax * scale, ay * scale, az * scale
                    values.append([ax, ay, az])

        if values:
            json_data = create_json_data(filename, values)
            with open(output_filename, 'w') as outfile:
                json.dump(json_data, outfile, indent=4)

if __name__ == "__main__":
    # Ensure the output folder exists
    os.makedirs(OUTPUT_FOLDER, exist_ok=True)

    # Check for data folder argument
    if len(sys.argv) < 2:
        print("Usage: python preprocess.py <input_data_folder>")
        sys.exit(1)
   
    # Get data folder from command-line argument
    data_folder = sys.argv[1]
   
    # Run the file processing
    process_files(data_folder)
    print("Data processing complete. JSON files saved in:", OUTPUT_FOLDER)


Once you have successfully downloaded the SisFall dataset, you can convert the raw data into the Edge Impulse Data Acquisition Format by running the following command:

python3 preprocess.py SisFall_Dataset


Here, ‘SisFall_Dataset’ refers to the name of the input folder. After executing this command, an output folder named ‘data’ will be created containing all the data files, which you can then upload to Edge Impulse from the Data Acquisition tab.
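
For reference, each generated file follows Edge Impulse’s data acquisition format. An abridged example is shown below; the timestamp and signature are illustrative, and the two value rows correspond to the first two retained rows of the fall sample above, converted to m/s²:

{
    "protected": {
        "ver": "v1",
        "alg": "HS256",
        "iat": 1732000000
    },
    "signature": "0b2f...e9a1",
    "payload": {
        "device_name": "aa:bb:cc:dd:ee:ff",
        "device_type": "generic",
        "interval_ms": 20,
        "sensors": [
            {"name": "ax", "units": "m/s2"},
            {"name": "ay", "units": "m/s2"},
            {"name": "az", "units": "m/s2"}
        ],
        "values": [
            [-0.345, -9.845, -0.958],
            [0.077, -10.764, -0.958]
        ]
    }
}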

After uploading the dataset, resample the data by running resample.py:

python3 resample.py
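
The contents of resample.py aren’t included in this post, so the following is only a hypothetical sketch: it assumes the script walks the generated JSON files and resamples their values to a uniform 20 ms interval via linear interpolation. If your project enforces HMAC signature verification, you would also need to re-sign each payload as in preprocess.py.

# Hypothetical sketch only; the original resample.py is not shown in this post.
import glob
import json
import numpy as np

TARGET_INTERVAL_MS = 20

for path in glob.glob("./data/*.json"):
    with open(path) as f:
        doc = json.load(f)
    payload = doc["payload"]
    values = np.asarray(payload["values"], dtype=float)   # shape (n, 3)
    src_t = np.arange(len(values)) * payload["interval_ms"]
    dst_t = np.arange(0, src_t[-1], TARGET_INTERVAL_MS)
    # Linearly interpolate each axis onto the uniform time grid
    resampled = np.column_stack(
        [np.interp(dst_t, src_t, values[:, k]) for k in range(values.shape[1])]
    )
    payload["values"] = resampled.tolist()
    payload["interval_ms"] = TARGET_INTERVAL_MS
    with open(path, "w") as f:
        json.dump(doc, f, indent=4)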

After resampling is done, perform a Train/Test split, which divides the dataset into training and testing sets in an 80/20 ratio.

Create Impulse

To build an ML model in Edge Impulse, you start by creating an impulse. This is the starting point where you define the entire pipeline for processing and analyzing your sensor data.

To create an impulse in Edge Impulse, follow these steps:

  1. Create a new impulse
  • Navigate to the Impulse Design section.
  • Click on Create Impulse to start setting up your impulse pipeline.
  2. Add a processing block
  • Click on Add a processing block and select Raw Data from the list of available processing blocks.
  • The Raw Data block passes the sensor data through without any pre-processing, allowing the deep learning model to learn features directly from the raw signal and automatically identify patterns relevant for classification. You could also use Spectral Analysis, which is great for analyzing repetitive motion such as accelerometer data and extracts the frequency and power characteristics of a signal over time. But since we are building a deep learning model, we proceeded without any pre-processing.
  3. Add a learning block
  • Click on Add a learning block and choose Classification.
  • The Classification block learns from the features in the raw data and applies that knowledge to classify new, unseen data. It identifies patterns in the data that correspond to the different classes (e.g., ADL, FALL).
  4. Configure window size and window increase
  • Set both the Window size and Window increase to 4000 ms. The data will be divided into non-overlapping frames of 4000 milliseconds (4 seconds), producing distinct, independent windows for classification.
  • At the 50 Hz sampling rate, a 4000 ms window contains 200 samples of 3 axes each, i.e. 600 values per window (this matches the model input shape shown later). The window is long enough to capture the full pattern of a fall, making it well suited for this classification task.
  5. Save the impulse
  • After configuring the processing and learning blocks, click Save Impulse.

Feature generation

At this stage, we are ready to proceed to the Raw Data tab and begin the feature generation process. The Raw Data tab offers various options for manipulating the data, including adjusting axis scales and applying filters. However, for this project, we have opted to retain the default settings and move directly to generating the features.

To generate features, we will utilize a range of algorithms and techniques designed to identify key patterns and characteristics within the data. These extracted features will be employed by the learning block of our impulse to categorize the accelerometer data into one of two predefined classes. By carefully selecting and extracting relevant features, we aim to develop a more accurate and robust model for classifying accelerometer data.

Model training

Having extracted and prepared our features, we are now ready to proceed to the Classifier tab to begin training our model. The Classifier tab offers various options for fine-tuning the model’s behavior, such as adjusting the number of neurons in the hidden layers, setting the learning rate, and determining the number of training epochs.

In our case, however, the default model isn’t enough, so we switch to Expert Mode, which gives us the space to build our own deep learning model. For this project, we’re using a Temporal CNN.

A Temporal Convolutional Neural Network (Temporal CNN or TCN) is a type of deep learning model that is designed to handle sequential data, such as time-series data, by applying convolutional layers in a way that captures temporal dependencies within the data. Unlike traditional convolutional neural networks (CNNs), which are typically used for image processing, Temporal CNNs are specifically optimized to deal with the sequential nature of time-series data, making them suitable for tasks like speech recognition, video processing, and sensor data classification.

How a Temporal CNN works

  • Convolutional Layers: Temporal CNNs use 1D convolutional layers that slide over time-series data, applying filters to capture important patterns or features over time. These filters can capture local features like trends, spikes, or periodicity, which are essential for understanding time-based data.
  • Dilation: One key feature of Temporal CNNs is the use of dilated convolutions. These allow the model to capture long-range dependencies across time steps without requiring an excessively deep network. By skipping some time steps between the filters, dilated convolutions enable the model to process wider temporal contexts efficiently (see the receptive field calculation after this list).
  • Residual Connections: Temporal CNNs often use residual connections, which help to prevent vanishing gradients and allow the network to learn deeper representations without losing important features from earlier layers.
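
As a concrete sense of scale: with the kernel size (3) and dilation rates (1, 2, 4, 8) used in the model below, which stacks dilated convolutions but omits residual connections, the receptive field of the final convolutional layer can be computed directly:

# Receptive field of a stack of causal 1D convolutions:
# RF = 1 + sum((kernel_size - 1) * d for each dilation rate d)
kernel_size = 3
dilation_rates = [1, 2, 4, 8]
rf = 1 + sum((kernel_size - 1) * d for d in dilation_rates)
print(rf)  # 31 samples, i.e. 31 * 20 ms = 620 ms of context at 50 Hz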

Why it’s useful

Accelerometer data, such as that provided by the SisFall dataset, is sequential by nature—it consists of measurements taken at regular time intervals. Falls are events that occur suddenly and exhibit distinct patterns in accelerometer data, such as rapid changes in acceleration, sudden peaks, or sharp changes in orientation. These features are crucial for fall detection systems, but identifying them requires analyzing temporal dependencies and patterns across multiple time steps.
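
A quick way to see these patterns is to look at the per-sample acceleration magnitude. The sketch below uses illustrative numbers, not values from the dataset, to show the contrast between rest and impact:

import numpy as np

def magnitude(window):
    """Per-sample acceleration magnitude from [[ax, ay, az], ...] in m/s²."""
    return np.linalg.norm(np.asarray(window, dtype=float), axis=1)

# At rest the magnitude sits near 9.8 m/s² (1 g); a fall shows a brief
# near-free-fall dip followed by a sharp impact spike.
rest = magnitude([[0.1, -9.8, 0.3]])
impact = magnitude([[14.2, -22.5, 8.9]])
print(rest[0], impact[0])  # ≈ 9.81 vs ≈ 28.1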

Temporal CNNs are well-suited for Fall Detection because:

  • Capturing Temporal Patterns: Falls are typically characterized by rapid and abrupt changes in acceleration. Temporal CNNs excel at identifying these temporal patterns, such as spikes or trends in accelerometer signals, over time. The convolutional filters capture these temporal dependencies effectively.
  • Scalability to Large Datasets: The SisFall dataset, which contains a large collection of labeled accelerometer data for both activities of daily living (ADL) and fall events, is large and highly varied. Temporal CNNs are well-suited to handle large datasets because of their ability to extract hierarchical features at different temporal scales. By processing large amounts of data efficiently, they can learn to distinguish between subtle differences in data across many classes (like different types of movements or falls).
  • Efficiency with Long Sequences: Temporal CNNs are particularly good at processing long sequences of data. With datasets like SisFall, which may contain long time-series data from accelerometers, Temporal CNNs can capture long-range dependencies between events, like the buildup to a fall or the post-fall behavior, without needing excessively deep models.
  • Effective Generalization: Temporal CNNs can generalize well to unseen fall events because they focus on learning robust features from the raw data. This is critical in fall detection, as real-world fall events can vary widely in terms of intensity, direction, and body orientation.
  • Dimensionality Reduction: The model can learn to focus on the most relevant features by learning temporal dependencies. This reduces the need for extensive manual feature engineering, which is especially useful when dealing with large datasets like SisFall.

Our TCN model summary is as follows:

_________________________________________________________________
Layer (type)                Output Shape              Param #  
=================================================================
input_1 (InputLayer)        [(None, 600)]             0        

reshape (Reshape)           (None, 200, 3)            0        

normalization (Normalization)  (None, 200, 3)          0        

conv1d (Conv1D)             (None, 200, 64)           640      

dropout (Dropout)           (None, 200, 64)           0        

conv1d_1 (Conv1D)           (None, 200, 64)           12352    

dropout_1 (Dropout)         (None, 200, 64)           0        

conv1d_2 (Conv1D)           (None, 200, 64)           12352    

dropout_2 (Dropout)         (None, 200, 64)           0        

conv1d_3 (Conv1D)           (None, 200, 64)           12352    

dropout_3 (Dropout)         (None, 200, 64)           0        

global_average_pooling1d (GlobalAveragePooling1D)  (None, 64)   0        

dense (Dense)               (None, 32)                2080     

dropout_4 (Dropout)         (None, 32)                0        

dense_1 (Dense)             (None, 2)                 66       

=================================================================
Total params: 39,842
Trainable params: 39,842
Non-trainable params: 0


The model processes the input data through several layers, including:

  • Reshaping and Normalization: The input data is reshaped, and a normalization layer is applied with predefined mean and variance values for the accelerometer data, ensuring the data is standardized before being passed through the network.
  • Temporal CNN Block: Multiple 1D convolutional layers with increasing dilation rates (1, 2, 4, 8) capture temporal dependencies in the sequential data. These layers use ReLU activations and dropout for regularization.
  • Global Average Pooling: After processing through convolutional layers, global average pooling reduces the temporal dimension, retaining only the most important features.
  • Fully Connected MLP Layers: The pooled features are passed through a multi-layer perceptron (MLP) with ReLU activations and dropout for further processing.
  • Output Layer: The final output layer uses a softmax activation to classify the data into predefined categories (such as FALL or ADL).

The model is trained with a learning rate of 0.0005, a batch size of 32, and 20 epochs using the Adam optimizer. The training process incorporates callbacks for monitoring progress. The full model training code is given below:

import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Input, Conv1D, Dropout, GlobalAveragePooling1D, Normalization, Reshape
from tensorflow.keras.optimizers import Adam

# Note: input_length, classes, train_dataset, validation_dataset,
# train_sample_count, and BatchLoggerCallback are provided by the
# Edge Impulse Expert Mode training environment.

EPOCHS = 20
LEARNING_RATE = 0.0005
BATCH_SIZE = 32

# TCN block
def temporal_cnn_block(inputs, filters, kernel_size, dilation_rate, dropout=0):
    x = Conv1D(filters=filters, kernel_size=kernel_size, padding="causal", dilation_rate=dilation_rate, activation="relu")(inputs)
    x = Dropout(dropout)(x)
    return x

def build_model(
    input_shape,
    filters,
    kernel_size,
    dilation_rates,
    mlp_units,
    dropout=0,
    mlp_dropout=0,
):
    inputs = Input(shape=input_shape)
    x = Reshape([int(input_length/3), 3])(inputs)
   
    # Normalization layer
    x = Normalization(axis=-1, mean=[-0.047443, -6.846333, -1.057524], variance=[16.179484,  33.019396,  22.892909])(x)
   
    # Temporal CNN layers with increasing dilation rates
    for dilation_rate in dilation_rates:
        x = temporal_cnn_block(x, filters=filters, kernel_size=kernel_size, dilation_rate=dilation_rate, dropout=dropout)
   
    x = GlobalAveragePooling1D()(x)
    for dim in mlp_units:
        x = Dense(dim, activation="relu")(x)
        x = Dropout(mlp_dropout)(x)
    outputs = Dense(classes, activation="softmax")(x)
    return Model(inputs, outputs)

input_shape = (input_length, )

model = build_model(
    input_shape,
    filters=64,
    kernel_size=3,
    dilation_rates=[1, 2, 4, 8],
    mlp_units=[32],
    mlp_dropout=0.40,
    dropout=0.25,
)

# Optimizer and compilation
opt = Adam(learning_rate=LEARNING_RATE, beta_1=0.9, beta_2=0.999)

# Add any callbacks you might have
callbacks = []
callbacks.append(BatchLoggerCallback(BATCH_SIZE, train_sample_count, epochs=EPOCHS))

# Train the neural network
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])

model.summary()

train_dataset = train_dataset.batch(BATCH_SIZE, drop_remainder=False)
validation_dataset = validation_dataset.batch(BATCH_SIZE, drop_remainder=False)

model.fit(train_dataset, epochs=EPOCHS, validation_data=validation_dataset, verbose=2, callbacks=callbacks)

After training, the model achieved an accuracy of 98.9%, a strong result given the size and variety of the dataset.

Model testing

Once we completed training and fine-tuning our model, we tested its performance on unseen data using the Model Testing tab and the Classify All feature. This step was critical for evaluating the model’s ability to accurately detect falls in scenarios it hadn’t encountered before.

The model demonstrated strong performance, achieving high classification accuracy on the test data. These results indicate that the model is both reliable and well-suited for real-world applications, providing confidence in its ability to detect falls effectively.

Deployment

On the Deployment page, select the “Create Library” option and choose “Particle Library”. This packages the trained model as a Particle library you can include in your firmware.

Setting up Particle Photon 2

  1. Connect the Photon 2 to your computer: Start by connecting the device to your computer using the included micro USB cable. Confirm that the device powers up, indicated by the LED turning on.
  2. Create and access your Particle account: If you do not already have a Particle account, visit the Particle website to sign up. After registering, log in to your account to proceed with the device setup and manage your devices.
  3. Set up the Photon 2
  • Open your browser and go to setup.particle.io to begin configuring your Photon 2. The setup wizard will guide you through linking the device to your Particle account.

Setting up Particle Workbench

If you want to develop the firmware and flash it locally without using Particle Web IDE, you can follow this guide to set up Particle Workbench in VS Code.

Integrating Twilio for SMS alerts

Integrating Twilio with a Particle Photon 2 device allows you to send SMS alerts directly from the Photon 2, making it ideal for applications where immediate notifications are crucial, such as in fall detection systems. Follow these steps to set up and integrate Twilio with your Photon 2 for sending SMS alerts.

Set up a Twilio account and get your credentials

  • If you don’t already have one, sign up for a Twilio account at Twilio’s website.
  • Once your account is set up, obtain your Account SID and Auth Token from the Twilio Console, as well as a Twilio phone number. You’ll need these to authorize and send SMS messages through Twilio.

Set up a Twilio integration in the Particle Console

  • Log in to the Particle Console and navigate to the Integrations section.
  • Click on New Integration and select Twilio.
  • Event Name: Choose an event name, like twilio_sms_alert, which the Photon 2 will trigger when it needs to send an SMS.
  • Parameters: Set the parameter fields as follows:
    • Username: Your Twilio Account SID.
    • Password: Your Twilio Auth Token.
    • Twilio SID: Your Twilio Account SID.
  • Form Data: Set the form data fields as follows:
    • From: Your Twilio phone number.
    • To: The recipient’s phone number (the one SMS alerts will be sent to).
    • Body: The message text, which can include dynamic values such as {{PARTICLE_EVENT_VALUE}} if you want to customize the message from the Photon 2 code.
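
Before touching firmware, you can sanity-check the integration by publishing a test event through the Particle Cloud API. Below is a minimal Python sketch; the access token and message text are placeholders, and the event name must match the one configured above:

import requests

ACCESS_TOKEN = "<particle_access_token>"  # from your Particle account

resp = requests.post(
    "https://api.particle.io/v1/devices/events",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    data={
        "name": "twilio_sms_alert",     # must match the integration's event name
        "data": "Test: fall detected",  # available to Body as {{PARTICLE_EVENT_VALUE}}
        "private": "true",
    },
)
print(resp.status_code, resp.text)  # a 200 response means the event was published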

Compile and upload the Edge Impulse model to the Photon 2

If you chose to deploy your project as a Particle library rather than a binary, follow these steps to flash the firmware from Particle Workbench:

  • Open a new VS Code window and make sure Particle Workbench is installed.
  • Open the VS Code Command Palette and type Particle: Import Project.
  • Select the project.properties file in the directory that you downloaded from Edge Impulse.
  • Open the VS Code Command Palette and type Particle: Configure Project for Device.
  • Select deviceOS@5.9.0.
  • Choose a target (e.g., P2; this option is also used for the Photon 2).
  • Compile and flash in one command with Particle: Flash application & DeviceOS (local).

Hardware

The core of this project is the Particle Photon 2 microcontroller, a lightweight and powerful device ideal for real-time fall detection. It supports 2.4 GHz and 5 GHz Wi-Fi, ensuring reliable connectivity in various network environments. Powered by a Realtek RTL8721DM processor with an ARM Cortex M33 CPU running at 200 MHz, the Photon 2 provides the processing power needed for complex, high-speed applications. Its compact form factor and IoT compatibility make it easy to integrate into wearable devices.

Its onboard RGB LED is utilized in this project to visually indicate device status. The RGB LED changes color to signal different states, such as alerting mode during a detected fall or normal operation during routine monitoring. This built-in LED provides a quick, at-a-glance way to understand the device’s status without needing additional display components, enhancing user awareness in a compact and efficient manner.

The ADXL362 3-axis accelerometer captures the movement data in this project. This high-performance sensor detects sudden accelerations along the x, y, and z axes, essential for recognizing falls. Its ultra-low power consumption, drawing just 1.8 µA in measurement mode and 300 nA in standby, is a significant advantage: it ensures minimal battery drain, making it ideal for wearable and battery-powered applications.

A momentary tactile push button module provides user input. This component allows users to interact directly with the system, offering a simple way to control the device without additional software interfaces.

Breadboard prototype

We started by connecting all the components to a breadboard to test the project. This setup allowed us to verify the functionality of the accelerometer, Photon 2, push button, and battery before final assembly. Testing on the breadboard ensured all connections and interactions worked as expected, making troubleshooting easier and streamlining the next steps.

CAD

We designed the enclosure for the fall detection system in Fusion 360, focusing on a compact, wearable watch-style configuration. This design ensures the device fits comfortably on the wrist, enabling accurate motion monitoring for improved fall detection.

The enclosure includes an upper section and a lower section, which securely hold the internal components, such as the Photon 2, accelerometer, and battery. We fastened these sections together using M3 x 10 mm screws to create a durable assembly.

We also designed a small button case for easy access to the push button, ensuring seamless user interaction. Our design prioritizes both functionality and comfort for a practical, wearable solution.

Assembly

We began the assembly process by attaching the ADXL362 accelerometer to the back of the Photon 2, carefully soldering the wires to ensure secure and reliable connections. Precision was key to avoid damaging the delicate components.

To power the device, we selected a 400 mAh LiPo battery with compatible connectors. The battery was placed directly above the accelerometer on the Photon 2, ensuring a compact and efficient layout.

Next, we secured the push button module, which allows the user to cancel false alarms. We carefully soldered its connections, ensuring it was properly integrated into the system.

Once all wiring and soldering were complete, we secured the upper and lower sections of the device using M3 screws. This created a sturdy housing for the components. Finally, we positioned the push button case in place, ensuring it was accessible for easy use.

With the assembly complete and straps attached, our fall detection device was ready for testing and deployment!

Deployment

The fall detector system continuously monitors the accelerometer to identify potential falls. When a fall is detected, the onboard RGB LED changes color to notify the wearer. If it’s a false detection, the user can press a push button to cancel the alert, signaling that they are safe. However, if the button is not pressed within 5 seconds, the system interprets the event as a legitimate fall and automatically sends an alert to a designated contact, ensuring timely assistance if needed.
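
The alert flow above amounts to a small state machine. Below is a minimal sketch of that logic, written in Python for readability; the actual firmware runs as C++ on the Photon 2, and set_led, button_pressed, and publish are hypothetical stand-ins for the device APIs:

import time

CONFIRM_WINDOW_S = 5  # the cancellation window described above

def handle_fall_event(set_led, button_pressed, publish):
    """Sketch of the post-detection flow: warn, wait for cancel, then alert."""
    set_led("red")  # RGB LED signals a suspected fall
    deadline = time.monotonic() + CONFIRM_WINDOW_S
    while time.monotonic() < deadline:
        if button_pressed():  # wearer cancels a false detection
            set_led("green")  # back to normal monitoring
            return False
        time.sleep(0.05)
    # No cancellation within the window: treat as a real fall
    publish("twilio_sms_alert", "Fall detected")  # triggers the Twilio integration
    return True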

Conclusion

This fall detection wearable redefines independence for seniors, combining practicality with powerful, precise technology to create a safety net where it matters most. By seamlessly integrating the ADXL362 accelerometer with the Particle Photon 2 and leveraging advanced algorithms, this device provides rapid, reliable detection while reducing false alarms through an intuitive push-button feature.

Beyond simple monitoring, this wearable ensures seniors are connected to timely assistance in the critical minutes following a fall when immediate intervention can be life-saving. For caregivers and loved ones, this device offers much-needed peace of mind, fostering a sense of security without compromising the wearer’s independence. This fall detector wearable is more than a device; it’s a step toward safer, more dignified aging, underscoring the vital role of technology in supporting independence and resilience in the years to come.

And that’s a wrap! By combining the Particle Photon 2, ADXL362 accelerometer, and a bit of deep learning magic, we’ve created a smart, affordable fall detection system. It’s designed to keep seniors safe by detecting falls in real time and notifying caregivers instantly. This project shows how tech can make a real difference in addressing everyday challenges, like ensuring peace of mind for seniors and their families. Thanks for following along, and happy building!
