
Tutorial for UWB Human Activity Recognition

import torch
import torch.nn as nn
import os

from pysensing.uwb.datasets.get_dataloader import *
from pysensing.uwb.models.get_model import *
from pysensing.uwb.training.har import *
from pysensing.uwb.inference.predict import *
from pysensing.uwb.inference.embedding import *

Download Data from Cloud Storage

Open the following link in your browser to download HAR datasets:

[Download Sleep_Pose_Net Dataset](https://pysensing.oss-ap-southeast-1.aliyuncs.com/data/uwb/sleep_pose_net_data.zip)

Unzip the downloaded file and move it to your data folder. For HAR, the data folder should look like this:

|---data
|------|---HAR
|------|------|---sleep_pose_net_data
|------|------|------|---Dataset I
|------|------|------|---Dataset II
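
Alternatively, here is a minimal sketch of downloading and extracting the archive from a script, using only the Python standard library. The target folder ./data/HAR is an assumption chosen to match the tree above:

import os
import urllib.request
import zipfile

url = "https://pysensing.oss-ap-southeast-1.aliyuncs.com/data/uwb/sleep_pose_net_data.zip"
target_dir = "./data/HAR"  # assumed location matching the tree above
os.makedirs(target_dir, exist_ok=True)

zip_path = os.path.join(target_dir, "sleep_pose_net_data.zip")
if not os.path.exists(zip_path):
    urllib.request.urlretrieve(url, zip_path)  # download the archive once
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall(target_dir)  # creates sleep_pose_net_data/ under target_dir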

Load the data

Human action recognition dataset:

Sleep Pose Net Dataset: the UWB data has size n x 160 x 100; the sizes of x_diff and x_wrtft depend on the preprocessing parameters.

Dataset 1
- number of classes: 6
- train number: 623
- test number: 307

Dataset 2
- number of classes: 7
- train number: 739
- test number: 365

Dataset name choices are:
- 'Sleepposenet_dataset1'
- 'Sleepposenet_dataset2_session1_ceiling'
- 'Sleepposenet_dataset2_session1_wall'
- 'Sleepposenet_dataset2_session1_all'
- 'Sleepposenet_dataset2_session2_ceiling'
- 'Sleepposenet_dataset2_session2_wall'
- 'Sleepposenet_dataset2_session2_all'
- 'Sleepposenet_dataset2_sessionALL_ceiling'
- 'Sleepposenet_dataset2_sessionALL_wall'
- 'Sleepposenet_dataset2_sessionALL_all'

root = './data'
train_loader, test_loader = load_har_dataset(root, 'Sleepposenet_dataset2_session1_all')
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for data in train_loader:
    x_diff, x_wrtft, labels = data
    print(x_diff.size())
    print(x_wrtft.size())
    print(labels.size())
    break
Loading Sleep Pose Net Dataset 2 session 1 all ...

100%|██████████| 739/739 [00:02<00:00, 292.70it/s]

100%|██████████| 365/365 [00:01<00:00, 295.47it/s]
torch.Size([32, 1, 40, 99])
torch.Size([32, 1, 13, 38])
torch.Size([32])

Load the model

Model zoo: Sleep Pose Net model

model = load_har_model(dataset_name='sleep_pose_net_dataset2', model_name='sleepposenet')
print(model)
sleepposenet(
  (TD_conv): Sequential(
    (0): Conv2d(1, 16, kernel_size=(2, 3), stride=(1, 1))
    (1): ReLU()
    (2): Dropout2d(p=0.3, inplace=False)
    (3): MaxPool2d(kernel_size=(2, 3), stride=(2, 3), padding=0, dilation=1, ceil_mode=False)
    (4): Conv2d(16, 32, kernel_size=(2, 3), stride=(1, 1))
    (5): ReLU()
    (6): Dropout2d(p=0.3, inplace=False)
    (7): MaxPool2d(kernel_size=(2, 3), stride=(2, 3), padding=0, dilation=1, ceil_mode=False)
    (8): Conv2d(32, 32, kernel_size=(2, 3), stride=(1, 1))
    (9): ReLU()
    (10): Dropout2d(p=0.3, inplace=False)
    (11): MaxPool2d(kernel_size=(2, 3), stride=(2, 3), padding=0, dilation=1, ceil_mode=False)
  )
  (TD_fc): Linear(in_features=256, out_features=128, bias=True)
  (WRTFT_conv): Sequential(
    (0): Conv2d(1, 10, kernel_size=(2, 2), stride=(1, 1))
    (1): ReLU()
    (2): Dropout2d(p=0.3, inplace=False)
    (3): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
    (4): Conv2d(10, 20, kernel_size=(2, 2), stride=(1, 1))
    (5): ReLU()
    (6): Dropout2d(p=0.3, inplace=False)
    (7): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
  )
  (WRTFT_fc): Linear(in_features=320, out_features=128, bias=True)
  (fc): Sequential(
    (0): Linear(in_features=256, out_features=10, bias=True)
    (1): ReLU()
    (2): Dropout(p=0.2, inplace=False)
    (3): Linear(in_features=10, out_features=7, bias=True)
  )
)
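
As a quick sanity check, you can run dummy tensors with the batch shapes printed earlier through the network. The two-argument forward call here is an assumption based on how the predictor is invoked later in this tutorial:

x_diff_dummy = torch.randn(32, 1, 40, 99)    # same shape as a training batch of x_diff
x_wrtft_dummy = torch.randn(32, 1, 13, 38)   # same shape as a training batch of x_wrtft
with torch.no_grad():
    logits = model(x_diff_dummy, x_wrtft_dummy)  # assumed forward signature: (x_diff, x_wrtft)
print(logits.shape)  # expected: torch.Size([32, 7]), one logit per class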

Model train

criterion = nn.CrossEntropyLoss()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

sleepposenet_training(
    root=root,
    dataset_name='Sleepposenet_dataset2_session1_all',
    datasetname_model='sleep_pose_net_dataset2',
    model_name='sleepposenet',
    num_epochs=5,
    learning_rate=0.001,
    save_weights=True
)
Loading Sleep Pose Net Dataset 2 session 1 all ...

100%|██████████| 739/739 [00:02<00:00, 300.68it/s]

100%|██████████| 365/365 [00:01<00:00, 301.16it/s]
Epoch:1, Accuracy:0.2279,Loss:1.863564254
Epoch:2, Accuracy:0.2812,Loss:1.714241055
Epoch:3, Accuracy:0.3442,Loss:1.663688969
Epoch:4, Accuracy:0.3273,Loss:1.640833629
Epoch:5, Accuracy:0.3598,Loss:1.627636008
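
Under the hood, each epoch follows a standard PyTorch supervised loop. The sketch below shows roughly what such an epoch looks like; it is not the library's exact code, and both the Adam optimizer and the two-argument forward call are assumptions:

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # assumed optimizer choice
model = model.to(device)
model.train()
for x_diff, x_wrtft, labels in train_loader:
    # move the two input views and the labels to the training device
    x_diff, x_wrtft, labels = x_diff.to(device), x_wrtft.to(device), labels.to(device)
    optimizer.zero_grad()
    outputs = model(x_diff, x_wrtft)   # assumed forward signature
    loss = criterion(outputs, labels)  # cross-entropy loss defined above
    loss.backward()
    optimizer.step()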

Model inference

You need to define the pre-trained weight path in the predictor object's pt_weight_path variable. Otherwise, the variable will be set to None and no weights will be loaded.

har_predictor = predictor(
    task='har',
    dataset_name='sleep_pose_net_dataset2',
    model_name='sleepposenet',
    pt_weights='./sleepposenet_weights.pth'
)
for data in test_loader:
    x_diff, x_wrtft, labels = data
    break
outputs = har_predictor.predict([x_diff, x_wrtft])
print("output:", outputs)
Pretrained weights loaded.
output: tensor([3, 3, 1, 4, 6, 3, 4, 4, 6, 1, 6, 4, 3, 6, 6, 6, 6, 3, 6, 6, 4, 3, 6, 2,
        3, 3, 4, 4, 4, 3, 6, 4, 3, 3, 6, 3, 1, 1, 3, 1, 6, 3, 4, 3, 3, 4, 3, 1,
        4, 6, 3, 4, 3, 1, 4, 6, 4, 4, 3, 1, 6, 4, 6, 1])
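
Since predict returns class indices, you can compare them directly against the ground-truth labels of the batch for a quick accuracy check. This is a sketch; the .cpu() call is a precaution in case the outputs live on the GPU:

batch_acc = (outputs.cpu() == labels).float().mean().item()
print(f"batch accuracy: {batch_acc:.4f}")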

Generate embedding

  • Note that the model_name variable in the load_har_model function represents the model structure name, while in the load_pretrain_weights function it represents both the model structure and the pretraining dataset name.

model = load_har_model(dataset_name='sleep_pose_net_dataset2', model_name='sleepposenet')
model = load_pretrain_weights(model, dataset_name='sleep_pose_net_dataset2', model_name='sleepposenet', device=device)
uwb_embedding = har_uwb_embedding(x_diff, x_wrtft, model, device)
print('uwb_embedding shape: ', uwb_embedding.shape)
uwb_embedding shape:  torch.Size([64, 256])
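
The 256-dimensional embeddings can feed downstream tasks such as retrieval or clustering. As one example, here is a minimal sketch of pairwise cosine similarity within the batch; the .cpu() call is a precaution in case the embeddings are on the GPU:

import torch.nn.functional as F

emb = F.normalize(uwb_embedding.cpu(), dim=1)  # L2-normalize each 256-d embedding
similarity = emb @ emb.T                       # 64 x 64 cosine-similarity matrix
print(similarity.shape)                        # torch.Size([64, 64])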

Total running time of the script: (0 minutes 11.735 seconds)
