.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "mmwave_raw/mmwave_raw_tutorial.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_mmwave_raw_mmwave_raw_tutorial.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_mmwave_raw_mmwave_raw_tutorial.py:

This notebook is a tutorial for using the mmWave raw data before the FFT
(time, chirps, virtual antennas, samples per chirp) as specified in cubelearn:
https://github.com/zhaoymn/cubelearn

.. GENERATED FROM PYTHON SOURCE LINES 12-15

Data loading and preprocessing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. GENERATED FROM PYTHON SOURCE LINES 15-41

.. code-block:: Python

    import numpy as np

    # Load an example data sample from the cubelearn HAR data.
    # Files are named {user}_{label}_{idx}.npy.
    # Data have shape (2, T, 128, 12, 256), where 2 is the real and imaginary
    # part of the raw data, T is the number of timestamps (10 for HGR and AGR,
    # 20 for HAR), 128 is the number of chirps in a frame, 12 is the number of
    # virtual antennas, and 256 is the number of samples per chirp.
    # https://github.com/zhaoymn/cubelearn?tab=readme-ov-file
    user = 7
    label = 2
    sample = 1

    # Replace with your data path. Please download and unzip the data from
    # https://github.com/zhaoymn/cubelearn?tab=readme-ov-file
    # The HAR data path should be .../HAR_data/activity_organized/{user}_{label}_{sample}.npy
    raw_data = np.load(f'./{user}_{label}_{sample}.npy')

    # Combine the real and imaginary parts
    data = raw_data[0, :, :, :, :] + raw_data[1, :, :, :, :] * 1j

    # DAT and RDAT models take a partial input for efficiency; skip this step for other models
    data = data[:, :64, :, :128]

    # The expected data type is complex64
    data = np.array(data, dtype=np.complex64)

.. GENERATED FROM PYTHON SOURCE LINES 42-45

Model loading and inference
~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. GENERATED FROM PYTHON SOURCE LINES 45-79
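As a quick aside before inference: the preprocessing above can be sanity-checked without downloading the dataset by running it on a synthetic cube of the same shape. This is a minimal sketch; the random array simply stands in for a real HAR recording of shape (2, 20, 128, 12, 256).

.. code-block:: Python

    import numpy as np

    # Synthetic stand-in for one HAR sample:
    # (2, T=20 timestamps, 128 chirps, 12 virtual antennas, 256 samples per chirp)
    raw = np.random.randn(2, 20, 128, 12, 256).astype(np.float32)

    # Same steps as above: combine real and imaginary parts,
    # then keep the partial cube used by DAT/RDAT models.
    cube = raw[0] + 1j * raw[1]
    cube = cube[:, :64, :, :128]
    cube = cube.astype(np.complex64)

    print(cube.shape, cube.dtype)  # (20, 64, 12, 128) complex64

If the shapes or dtype differ here, the real data will not match what the pretrained DAT/RDAT models expect.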
.. code-block:: Python

    import torch
    import requests

    from pysensing.mmwave_raw.models.network import DAT_2DCNNLSTM

    # URL of the pretrained model
    pretrained_model_url = "https://pysensing.oss-ap-southeast-1.aliyuncs.com/pretrain/mmwave_raw/HAR/DAT_2DCNNLSTM_HAR.pth"
    # */pretrain/{modality}/{task}/{model_name}.pth, where model_name is one of
    # {DAT_2DCNNLSTM_HAR, DAT_2DCNNLSTM_AGR, DAT_2DCNNLSTM_HGR,
    #  RDAT_3DCNNLSTM_HAR, RDAT_3DCNNLSTM_AGR, RDAT_3DCNNLSTM_HGR}
    local_model_path = "./DAT_2DCNNLSTM_HAR.pth"

    # Download the pretrained weights
    response = requests.get(pretrained_model_url)
    with open(local_model_path, "wb") as f:
        f.write(response.content)

    # Load the model and the pretrained weights
    model = DAT_2DCNNLSTM(HAR=True)
    model.load_state_dict(torch.load(local_model_path, weights_only=True)['model_state_dict'])
    model.eval()

    # Convert the data to a torch tensor
    data = torch.tensor(data)

    # Unsqueeze to add the batch dimension
    x = data.unsqueeze(0)
    one_hot = model(x)

    # Class prediction
    class_idx = torch.argmax(one_hot)
    print(f"The prediction is {class_idx == label}")

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    The prediction is True

.. GENERATED FROM PYTHON SOURCE LINES 80-83

Embedding extraction
--------------------

.. GENERATED FROM PYTHON SOURCE LINES 86-88

For LSTM models the embedding is extracted after the LSTM (recommended).

.. GENERATED FROM PYTHON SOURCE LINES 88-96

.. code-block:: Python

    from pysensing.mmwave_raw.inference.embedding import embedding

    emb = embedding(x, model, 'cpu', True)
    print(emb.shape)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    torch.Size([1, 20, 512])

.. GENERATED FROM PYTHON SOURCE LINES 97-101

For non-LSTM models the embedding is extracted after the final max pooling
layer before the FCs; it may have a different shape for different models.

.. GENERATED FROM PYTHON SOURCE LINES 101-111

.. code-block:: Python

    from pysensing.mmwave_raw.models.network import DAT_3DCNN

    model_ = DAT_3DCNN()
    emb = embedding(x, model_, 'cpu', False)
    print(emb.shape)
.. rst-class:: sphx-glr-script-out

.. code-block:: none

    torch.Size([16, 4, 7, 7])

.. GENERATED FROM PYTHON SOURCE LINES 112-114

For models other than DAT and RDAT, don't forget to use the whole data cube.

.. GENERATED FROM PYTHON SOURCE LINES 114-124

.. code-block:: Python

    from pysensing.mmwave_raw.models.network import RAT_3DCNN

    model_ = RAT_3DCNN()
    data_ = raw_data[0, :, :, :, :] + raw_data[1, :, :, :, :] * 1j
    data_ = data_[:, :128, :, :256]  # the whole data cube
    data_ = np.array(data_, dtype=np.complex64)
    data_ = torch.tensor(data_)
    x_ = data_.unsqueeze(0)
    emb = embedding(x_, model_, 'cpu', False)
    print(emb.shape)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    torch.Size([16, 4, 31, 7])

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 19.856 seconds)

.. _sphx_glr_download_mmwave_raw_mmwave_raw_tutorial.py:

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example

        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: mmwave_raw_tutorial.ipynb <mmwave_raw_tutorial.ipynb>`

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: mmwave_raw_tutorial.py <mmwave_raw_tutorial.py>`

.. only:: html

    .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_