.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "mmwave_PC/mmwave_PC_hgr_tutorial.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_mmwave_PC_mmwave_PC_hgr_tutorial.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_mmwave_PC_mmwave_PC_hgr_tutorial.py:

Tutorial for Human Gesture Recognition
==============================================================

.. GENERATED FROM PYTHON SOURCE LINES 8-10

!/usr/bin/env python
coding: utf-8

.. GENERATED FROM PYTHON SOURCE LINES 12-13

In[1]:

.. GENERATED FROM PYTHON SOURCE LINES 13-21

.. code-block:: Python


    import yaml
    import torch
    import torch.nn as nn
    from tqdm import tqdm
    import os

.. GENERATED FROM PYTHON SOURCE LINES 22-24

Dataset with M-Gesture:
------------------------

.. GENERATED FROM PYTHON SOURCE LINES 26-29

A point cloud gesture dataset collected with an FMCW mmWave radar, the TI-IWR1443 single-chip
76-GHz to 81-GHz mmWave sensor evaluation module. Two scenarios are included: short range
(Human-Radar Distance (HRD) < 0.5 m) and long range (2 m < HRD < 5 m). Only long-range gesture
recognition is supported, since only the long-range data contain point clouds.

.. GENERATED FROM PYTHON SOURCE LINES 31-33

Load the data
------------------------

.. GENERATED FROM PYTHON SOURCE LINES 35-36

In[2]:

.. GENERATED FROM PYTHON SOURCE LINES 36-42

.. code-block:: Python


    from pysensing.mmwave.PC.dataset.hgr import load_hgr_dataset
    # Load the M-Gesture dataset (downloaded automatically if it is not found locally)
    train_dataset, test_dataset = load_hgr_dataset("M-Gesture")

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Try to download M-Gesture dateset in /home/kemove/yyz/av-gihub/tutorials/mmwave_PC_source/mGesture
    Downloading M-Gesture to /home/kemove/yyz/av-gihub/tutorials/mmwave_PC_source/mGesture.zip...

.. GENERATED FROM PYTHON SOURCE LINES 59-61

Create model
------------------------

.. GENERATED FROM PYTHON SOURCE LINES 63-64

M-Gesture uses a CNN-based model, EVL_NN, with a feature-engineering module called RPM as the
baseline HGR method. From model.hgr we can import the desired HGR model designed for mmWave
point clouds. The EVL_NN architecture reimplemented for M-Gesture is as follows:

.. GENERATED FROM PYTHON SOURCE LINES 66-67

In[4]:

.. GENERATED FROM PYTHON SOURCE LINES 67-73

.. code-block:: Python


    from pysensing.mmwave.PC.model.hgr import EVL_NN
    model = EVL_NN(dataset="M-Gesture", num_classes=4)
    print(model)

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    EVL_NN(
      (C1): Sequential(
        (0): Conv2d(1, 32, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
      )
      (M): MaxPool2d(kernel_size=(3, 3), stride=(2, 2), padding=(0, 1), dilation=1, ceil_mode=False)
      (E1): Sequential(
        (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU(inplace=True)
        (2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (E2): Sequential(
        (0): Conv2d(32, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU(inplace=True)
        (2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (F): Sequential(
        (0): Linear(in_features=1792, out_features=256, bias=True)
      )
      (classifier): Sequential(
        (0): Linear(in_features=256, out_features=4, bias=True)
        (1): Softmax(dim=-1)
      )
    )
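Before training, it can help to confirm that the model accepts the expected input layout. The
optional sketch below feeds a random tensor shaped like one M-Gesture clip through EVL_NN; the
28 frames x 50 points x 5 features layout is taken from the sample printed in the inference
section of this tutorial and is only an assumption for any other data source.

.. code-block:: Python


    import torch
    from pysensing.mmwave.PC.model.hgr import EVL_NN

    # Hypothetical sanity check: a batch of one clip, 28 frames x 50 points x 5 features,
    # matching the sample shape printed later in this tutorial.
    check_model = EVL_NN(dataset="M-Gesture", num_classes=4)
    check_model.eval()
    dummy_clip = torch.randn(1, 28, 50, 5)
    with torch.no_grad():
        probs = check_model(dummy_clip)
    print(probs.shape)  # expected: torch.Size([1, 4]), one score per gesture class

Because the classifier ends in a Softmax layer, the output can be read directly as class
probabilities; taking torch.argmax over it gives the predicted gesture index, as done in the
inference section below.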
.. GENERATED FROM PYTHON SOURCE LINES 74-76

Model Train
------------------------

.. GENERATED FROM PYTHON SOURCE LINES 78-79

The pysensing library supports quick model training with the following steps.
The training interface builds on PyTorch loss functions, optimizers and dataloaders to
facilitate training. An example of how to define these components is provided below.

.. GENERATED FROM PYTHON SOURCE LINES 81-82

In[5]:

.. GENERATED FROM PYTHON SOURCE LINES 82-100

.. code-block:: Python


    # Create pytorch dataloaders
    train_loader = torch.utils.data.DataLoader(train_dataset, shuffle=True, batch_size=128, num_workers=16)
    test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=128, shuffle=False, num_workers=16)

    # Define pytorch loss function as criterion
    criterion = nn.CrossEntropyLoss()

    # Define pytorch optimizer for training
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # GPU acceleration with cuda
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

.. GENERATED FROM PYTHON SOURCE LINES 101-102

A quick training run using hgr_train. The resulting model parameters will be saved to
"train_{num_epochs}.pth".

.. GENERATED FROM PYTHON SOURCE LINES 104-105

In[6]:

.. GENERATED FROM PYTHON SOURCE LINES 105-112

.. code-block:: Python


    # Pysensing training interface
    from pysensing.mmwave.PC.inference.hgr import hgr_train
    # hgr_train(model, train_loader, num_epochs=1, optimizer=optimizer, criterion=criterion, device=device)
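If you need custom logging, checkpointing or learning-rate schedules, hgr_train can be
approximated with a plain PyTorch loop. The sketch below is an assumption about what one training
epoch does with the components defined above (forward pass, cross-entropy loss, Adam step); it is
not the library's exact implementation, and the helper name train_one_epoch is hypothetical.

.. code-block:: Python


    # A minimal, hand-rolled training epoch, sketched as an alternative to hgr_train.
    # The library routine may differ (e.g. in input casting or checkpoint saving).
    def train_one_epoch(model, loader, optimizer, criterion, device):
        model.to(device)
        model.train()
        running_loss, correct, total = 0.0, 0, 0
        for pc, label in loader:
            pc = pc.float().to(device)
            label = label.to(device)
            optimizer.zero_grad()
            output = model(pc)
            loss = criterion(output, label)
            loss.backward()
            optimizer.step()
            running_loss += loss.item() * label.size(0)
            correct += (output.argmax(dim=-1) == label).sum().item()
            total += label.size(0)
        return running_loss / total, correct / total

    # Usage (commented out, like hgr_train above, to keep the documentation build fast):
    # epoch_loss, epoch_acc = train_one_epoch(model, train_loader, optimizer, criterion, device)
    # torch.save(model.state_dict(), "train_1.pth")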
.. GENERATED FROM PYTHON SOURCE LINES 113-115

Model inference
------------------------

.. GENERATED FROM PYTHON SOURCE LINES 117-119

Load the pretrained model, e.g. from
https://pysensing.oss-ap-southeast-1.aliyuncs.com/pretrain/mmwave_pc/HGR/M-Gesture_EVL_NN.pth,
and perform human gesture recognition!

.. GENERATED FROM PYTHON SOURCE LINES 121-122

In[7]:

.. GENERATED FROM PYTHON SOURCE LINES 122-128

.. code-block:: Python


    # load pretrained model
    from pysensing.mmwave.PC.inference import load_pretrain
    model = load_pretrain(model, "M-Gesture", "EVL_NN").to(device)
    model.eval()

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Use pretrained model!

    EVL_NN(
      (C1): Sequential(
        (0): Conv2d(1, 32, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
      )
      (M): MaxPool2d(kernel_size=(3, 3), stride=(2, 2), padding=(0, 1), dilation=1, ceil_mode=False)
      (E1): Sequential(
        (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU(inplace=True)
        (2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (E2): Sequential(
        (0): Conv2d(32, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU(inplace=True)
        (2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (F): Sequential(
        (0): Linear(in_features=1792, out_features=256, bias=True)
      )
      (classifier): Sequential(
        (0): Linear(in_features=256, out_features=4, bias=True)
        (1): Softmax(dim=-1)
      )
    )

.. GENERATED FROM PYTHON SOURCE LINES 129-130

Test the model on the testing dataset.

.. GENERATED FROM PYTHON SOURCE LINES 132-133

In[8]:

.. GENERATED FROM PYTHON SOURCE LINES 133-136

.. code-block:: Python


    from pysensing.mmwave.PC.inference.hgr import hgr_test
    # hgr_test(model, test_loader, criterion=criterion, device=device)

.. GENERATED FROM PYTHON SOURCE LINES 137-138

Model inference on a sample, and deep feature embedding of the input modality for the HGR task.

.. GENERATED FROM PYTHON SOURCE LINES 140-141

In[9]:

.. GENERATED FROM PYTHON SOURCE LINES 141-154

.. code-block:: Python


    idx = 5
    pc, label = test_dataset.__getitem__(idx)
    print(pc.shape)
    pc = torch.tensor(pc).unsqueeze(0).float().to(device)
    predicted_result = model(pc)
    # Report the predicted gesture first, then the ground-truth label
    print("The predicted gesture is {}, while the ground truth is {}".format(torch.argmax(predicted_result).cpu(), label))

    # Deep feature embedding
    from pysensing.mmwave.PC.inference.embedding import embedding
    emb = embedding(input=pc, model=model, dataset_name="M-Gesture", model_name="EVL_NN", device=device)
    print("The shape of feature embedding is: ", emb.shape)

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    (28, 50, 5)
    The predicted gesture is 0, while the ground truth is 0
    The shape of feature embedding is:  torch.Size([256])

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (3 minutes 9.096 seconds)

.. _sphx_glr_download_mmwave_PC_mmwave_PC_hgr_tutorial.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: mmwave_PC_hgr_tutorial.ipynb <mmwave_PC_hgr_tutorial.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: mmwave_PC_hgr_tutorial.py <mmwave_PC_hgr_tutorial.py>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_