.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "acoustic/acoustic_har_tutorial.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_acoustic_acoustic_har_tutorial.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_acoustic_acoustic_har_tutorial.py:

Acoustic Human Activity Recognition Tutorial
============================================

.. GENERATED FROM PYTHON SOURCE LINES 7-8

!pip install pysensing

.. GENERATED FROM PYTHON SOURCE LINES 10-12

In this tutorial, we will implement code for acoustic human activity recognition (HAR).

.. GENERATED FROM PYTHON SOURCE LINES 12-27

.. code-block:: Python

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    torch.backends.cudnn.benchmark = True
    import matplotlib.pyplot as plt
    import numpy as np

    import pysensing.acoustic.datasets.har as har_datasets
    import pysensing.acoustic.models.har as har_models
    import pysensing.acoustic.models.get_model as acoustic_models
    import pysensing.acoustic.inference.embedding as embedding

    # Fix random seeds for reproducibility
    seed = 42
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    np.random.seed(seed)

    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

.. GENERATED FROM PYTHON SOURCE LINES 28-35

SAMoSA: Sensing Activities with Motion and Subsampled Audio
-----------------------------------------------------------

The SAMoSA dataset is designed to predict user activities from audio and IMU
data collected by a smartwatch. It contains 27 activity classes in total. In
this library, we provide a dataloader that uses only the audio data to
predict these activities.

.. GENERATED FROM PYTHON SOURCE LINES 37-39

Load the data
-------------

.. GENERATED FROM PYTHON SOURCE LINES 39-63

.. code-block:: Python

    # Method 1: Use get_dataloader
    from pysensing.acoustic.datasets.get_dataloader import *
    train_loader, test_loader = load_har_dataset(
        root='./data',
        dataset='samosa',
        download=True)

    # Method 2: Manually set up the dataloaders
    root = './data'  # The path that contains the SAMoSA dataset
    samosa_traindataset = har_datasets.SAMoSA(root, 'train')
    samosa_testdataset = har_datasets.SAMoSA(root, 'test')

    # Define the dataloaders
    samosa_trainloader = DataLoader(samosa_traindataset, batch_size=64, shuffle=True, drop_last=True)
    samosa_testloader = DataLoader(samosa_testdataset, batch_size=64, shuffle=True, drop_last=True)

    dataclass = samosa_traindataset.class_dict
    datalist = samosa_traindataset.audio_data

    # Example of a sample in the dataset
    index = 50  # Randomly select an index
    spectrogram, activity = samosa_traindataset.__getitem__(index)
    plt.figure(figsize=(10, 5))
    plt.imshow(spectrogram.numpy()[0])
    plt.title("Spectrogram for activity: {}".format(activity))
    plt.show()

.. image-sg:: /acoustic/images/sphx_glr_acoustic_har_tutorial_001.png
   :alt: Spectrogram for activity: 0
   :srcset: /acoustic/images/sphx_glr_acoustic_har_tutorial_001.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    using dataset: SAMoSA

.. GENERATED FROM PYTHON SOURCE LINES 64-66

Load the model
--------------

.. GENERATED FROM PYTHON SOURCE LINES 66-73

.. code-block:: Python

    # Method 1:
    samosa_model = har_models.HAR_SAMCNN(dropout=0.6).to(device)
    # Method 2:
    samosa_model = acoustic_models.load_har_model('samcnn', pretrained=True).to(device)

.. GENERATED FROM PYTHON SOURCE LINES 74-76

Model training and testing
--------------------------

.. GENERATED FROM PYTHON SOURCE LINES 76-86

.. code-block:: Python

    from pysensing.acoustic.inference.training.har_train import *

    # Model training
    epoch = 1
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(samosa_model.parameters(), 0.0001)
    har_train_val(samosa_model, samosa_trainloader, samosa_testloader, epoch,
                  optimizer, criterion, device, save_dir='./data', save=True)

    # Model testing
    test_loss = har_test(samosa_model, samosa_testloader, criterion, device)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Train round0/1:   0%|          | 0/78 [00:00

.. container:: sphx-glr-download sphx-glr-download-python

    :download:`Download Python source code: acoustic_har_tutorial.py <acoustic_har_tutorial.py>`

.. only:: html

    .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_
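A note on the data representation: the SAMoSA dataloader yields spectrogram
tensors rather than raw waveforms. The sketch below shows, in plain NumPy, how
a magnitude spectrogram like the one visualized above can be computed from a
1-D audio signal via a framed FFT. This is an illustrative sketch only:
``magnitude_spectrogram`` is a hypothetical helper, not part of pysensing, and
the actual SAMoSA preprocessing may differ (e.g., log-mel features).

.. code-block:: Python

    import numpy as np

    def magnitude_spectrogram(signal, n_fft=256, hop=128):
        """Framed-FFT magnitude spectrogram (hypothetical helper,
        not the pysensing implementation)."""
        window = np.hanning(n_fft)
        n_frames = 1 + (len(signal) - n_fft) // hop
        # Slice the signal into overlapping windowed frames
        frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                           for i in range(n_frames)])
        # Magnitude of the one-sided FFT of each frame
        spec = np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, n_fft//2 + 1)
        return spec.T  # (freq_bins, n_frames), as plotted with plt.imshow

    # 1 second of a 440 Hz tone sampled at 16 kHz
    audio = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
    spec = magnitude_spectrogram(audio)
    print(spec.shape)  # (129, 124)

With ``n_fft=256`` at 16 kHz each frequency bin spans 62.5 Hz, so the 440 Hz
tone peaks near bin 7; real recordings would show the richer time-frequency
patterns the HAR model classifies.
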