.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "acoustic/acoustic_hgr_tutorial.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_acoustic_acoustic_hgr_tutorial.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_acoustic_acoustic_hgr_tutorial.py:

Acoustic Hand Gesture Recognition Tutorial
==============================================================

.. GENERATED FROM PYTHON SOURCE LINES 7-8

!pip install pysensing

.. GENERATED FROM PYTHON SOURCE LINES 10-12

In this tutorial, we will implement code for acoustic hand gesture recognition.

.. GENERATED FROM PYTHON SOURCE LINES 12-23

.. code-block:: Python

    import torch
    import torchaudio
    import matplotlib.pyplot as plt
    from torch.utils.data import DataLoader
    import os
    import numpy as np
    import torch.nn as nn
    import tqdm
    import sys
    from pysensing.acoustic.datasets.hgr import AMG

    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

.. GENERATED FROM PYTHON SOURCE LINES 24-33

Hand Gesture Recognition with Acoustic Myography and Wavelet Scattering Transform
---------------------------------------------------------------------------------

Reimplementation of "Hand Gesture Recognition with Acoustic Myography and Wavelet Scattering Transform".

This dataset contains acoustic myography recordings of different hand gestures; the acoustic data has 8 channels.
In this library, subjects AA01, CU14, DH18, NL20, NM08, and SR11 are selected as testing data, while the remaining subjects are used for training.
The classes contained in the dataset are: Pronation, Supination, Wrist Flexion, Wrist Extension, Radial Deviation, Ulnar Deviation, Hand close, Hand open, Hook grip, Fine pinch, Tripod grip, Index finger flexion, Thumb finger flexion, and No movement (Rest).

.. GENERATED FROM PYTHON SOURCE LINES 35-38

Load the data
------------------------

Method 1: Use get_dataloader

.. GENERATED FROM PYTHON SOURCE LINES 38-60

.. code-block:: Python

    from pysensing.acoustic.datasets.get_dataloader import *
    train_loader, test_loader = load_hgr_dataset(
        root='./data',
        download=True)

    # Method 2: Manually set up the dataloader
    root = './data'  # The path containing the AMG dataset
    amg_traindataset = AMG(root, 'train')
    amg_testdataset = AMG(root, 'test')

    # Define the dataloaders
    amg_trainloader = DataLoader(amg_traindataset, batch_size=32, shuffle=False, drop_last=True)
    amg_testloader = DataLoader(amg_testdataset, batch_size=32, shuffle=False, drop_last=True)

    # List the activity classes in the dataset
    dataclass = amg_traindataset.class_dict

    # Example of the samples in the dataset
    index = 128  # Randomly select an index
    spectrogram, activity = amg_traindataset[index]
    plt.figure(figsize=(10, 6))
    plt.imshow(spectrogram.numpy()[0])
    plt.title("Spectrogram for activity: {}".format(activity))
    plt.show()

.. image-sg:: /acoustic/images/sphx_glr_acoustic_hgr_tutorial_001.png
   :alt: Spectrogram for activity: 2
   :srcset: /acoustic/images/sphx_glr_acoustic_hgr_tutorial_001.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    using dataset: AMG
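Training a classifier (optional sketch)
----------------------------------------

The tutorial stops after loading and visualizing the data. As an illustration of where to go next, here is a minimal training sketch that reuses the imports, loaders, and ``device`` defined above. It is not the wavelet scattering model from the referenced paper; it assumes the loaders yield ``(spectrogram, label)`` batches, that the spectrograms are 8-channel 2D tensors (as the dataset description suggests), and that labels are integer indices over the 14 gesture classes. ``SimpleCNN`` is a hypothetical stand-in classifier.

.. code-block:: Python

    # Hypothetical stand-in classifier -- NOT the wavelet scattering
    # model from the referenced paper. Assumes 8-channel spectrogram
    # inputs and the 14 gesture classes listed above.
    class SimpleCNN(nn.Module):
        def __init__(self, num_classes=14):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(8, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # -> (N, 32, 1, 1), input-size agnostic
            )
            self.classifier = nn.Linear(32, num_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = SimpleCNN().to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One illustrative epoch over the training loader.
    model.train()
    for spectrograms, labels in amg_trainloader:
        spectrograms = spectrograms.to(device).float()
        labels = labels.to(device).long()
        optimizer.zero_grad()
        loss = criterion(model(spectrograms), labels)
        loss.backward()
        optimizer.step()

    # Quick accuracy check on the held-out subjects.
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for spectrograms, labels in amg_testloader:
            preds = model(spectrograms.to(device).float()).argmax(dim=1)
            correct += (preds == labels.to(device)).sum().item()
            total += labels.size(0)
    print("Test accuracy: {:.3f}".format(correct / total))

``AdaptiveAvgPool2d(1)`` keeps the sketch agnostic to the spectrogram's time-frequency resolution. In practice you would also shuffle the training loader and train for more than one epoch.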
.. GENERATED FROM PYTHON SOURCE LINES 61-62

And that's it. We're done with our acoustic hand gesture recognition tutorial. Thanks for reading.

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 1.226 seconds)

.. _sphx_glr_download_acoustic_acoustic_hgr_tutorial.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: acoustic_hgr_tutorial.ipynb <acoustic_hgr_tutorial.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: acoustic_hgr_tutorial.py <acoustic_hgr_tutorial.py>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_