Demo: How to use our Pose Transformer for unsupervised identity tracking of animals#

This notebook illustrates how to use the transformer for a multi-animal DeepLabCut (maDLC) Demo tri-mouse project:#

  • load our mini-demo data that includes a pretrained model and unlabeled video.

  • analyze a novel video.

  • use the transformer to do unsupervised ID tracking.

  • create quality check plots and video.

To create a full maDLC pipeline please see our full docs: https://deeplabcut.github.io/DeepLabCut/README.html#

To get started, please go to "Runtime" -> "Change runtime type" -> select "Python 3", and then select "GPU"#

As the Colab environments were updated to CUDA 12.X and Python 3.11, we need to install DeepLabCut and TensorFlow in a specific way to get TensorFlow to connect to the GPU.

‼️ Attention: this demo is for maDLC, i.e., DeepLabCut version 2.2-2.3.

  • the installation is slow on Colab due to the several steps needed to install older versions of PyTorch and DeepLabCut.

# Downgrade PyTorch to a version using CUDA 11.8 and cudnn 8
# This will also install the required CUDA libraries, for both PyTorch and TensorFlow
!pip install torch==2.3.1 torchvision --index-url https://download.pytorch.org/whl/cu118
import torch
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0))
!nvcc --version  # sometimes not available
True
NVIDIA A100-SXM4-40GB
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Jun__6_02:18:23_PDT_2024
Cuda compilation tools, release 12.5, V12.5.82
Build cuda_12.5.r12.5/compiler.34385749_0
# Install TensorFlow, tensorpack and tf_slim versions compatible with DeepLabCut
!pip install "tensorflow==2.12.1" "tensorpack>=0.11" "tf_slim>=1.1.0"
# As described in https://www.tensorflow.org/install/pip#step-by-step_instructions,
# create symbolic links to NVIDIA shared libraries:
!ln -svf /usr/local/lib/python3.11/dist-packages/nvidia/*/lib/*.so* /usr/local/lib/python3.11/dist-packages/tensorflow
# Install DLC version 2.2-2.3 (pre DLC3):
!pip install deeplabcut==2.3.11
import tensorflow
import deeplabcut
import torch
import os
print("DLC version: ", deeplabcut.__version__)
print("torch version: ",torch.__version__)
print("tensorflow version: ",tensorflow.__version__)
DLC version:  2.3.11
torch version:  2.3.1+cu118
tensorflow version:  2.12.1

Important - Restart the Runtime for the updated packages to be imported!#

PLEASE click "Restart runtime" from the output above before proceeding!
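After restarting, you can optionally confirm that TensorFlow sees the GPU before moving on. This quick sanity check is not part of the original demo, just a minimal sketch:

# Optional sanity check (not in the original demo):
# confirm TensorFlow detects the GPU after the restart.
import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))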

No information needs to be edited in the cells below; you can simply click run on each:

Download our Demo Project from our server:#

# Download our demo project:
import requests
from io import BytesIO
from zipfile import ZipFile

url_record = "https://zenodo.org/api/records/7883589"
response = requests.get(url_record)
if response.status_code == 200:
    file = response.json()["files"][0]
    title = file["key"]
    print(f"Downloading {title}...")
    with requests.get(file["links"]["self"], stream=True) as r:
        with ZipFile(BytesIO(r.content)) as zf:
            zf.extractall(path="/content")
else:
    raise ValueError(f"The URL {url_record} could not be reached.")
Downloading demo-me-2021-07-14.zip...
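If you want to verify the download, a small optional check (not part of the original demo) confirms the project folder and its config.yaml were extracted as expected:

# Optional sanity check (not in the original demo):
# confirm the project folder was extracted to /content.
import os

project_path = "/content/demo-me-2021-07-14"
assert os.path.isfile(os.path.join(project_path, "config.yaml")), "config.yaml not found!"
print(os.listdir(project_path))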

Analyze a novel 3 mouse video with our maDLC DLCRNet, pretrained on 3 mice data#

In one step, since auto_track=True, you extract detections and association costs, create tracklets, and stitch them. We can use this baseline to compare against the transformer-guided tracking below.

project_path = "/content/demo-me-2021-07-14"
config_path = os.path.join(project_path, "config.yaml")
video = os.path.join(project_path, "videos", "videocompressed1.mp4")
deeplabcut.analyze_videos(config_path, [video],
                          shuffle=0, videotype="mp4",
                          auto_track=True)
Using snapshot-20000 for model /content/demo-me-2021-07-14/dlc-models/iteration-0/demoJul14-trainset95shuffle0
/usr/local/lib/python3.11/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py:1694: UserWarning: `layer.apply` is deprecated and will be removed in a future version. Please use `layer.__call__` method instead.
  warnings.warn('`layer.apply` is deprecated and '
Activating extracting of PAFs
Starting to analyze %  /content/demo-me-2021-07-14/videos/videocompressed1.mp4
Loading  /content/demo-me-2021-07-14/videos/videocompressed1.mp4
Duration of video [s]:  77.67 , recorded with  30.0 fps!
Overall # of frames:  2330  found with (before cropping) frame dimensions:  640 480
Starting to extract posture from the video(s) with batchsize: 8
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2330/2330 [00:39<00:00, 58.83it/s]
Video Analyzed. Saving results in /content/demo-me-2021-07-14/videos...
/usr/local/lib/python3.11/dist-packages/deeplabcut/utils/auxfun_multianimal.py:83: UserWarning: default_track_method` is undefined in the config.yaml file and will be set to `ellipse`.
  warnings.warn(
Using snapshot-20000 for model /content/demo-me-2021-07-14/dlc-models/iteration-0/demoJul14-trainset95shuffle0
Processing...  /content/demo-me-2021-07-14/videos/videocompressed1.mp4
Analyzing /content/demo-me-2021-07-14/videos/videocompressed1DLC_dlcrnetms5_demoJul14shuffle0_20000.h5
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2330/2330 [00:02<00:00, 1088.72it/s]
2330it [00:06, 342.29it/s] 
The tracklets were created (i.e., under the hood deeplabcut.convert_detections2tracklets was run). Now you can 'refine_tracklets' in the GUI, or run 'deeplabcut.stitch_tracklets'.
Processing...  /content/demo-me-2021-07-14/videos/videocompressed1.mp4
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4/4 [00:00<00:00, 1488.53it/s]
/usr/local/lib/python3.11/dist-packages/deeplabcut/refine_training_dataset/stitch.py:934: FutureWarning: Starting with pandas version 3.0 all arguments of to_hdf except for the argument 'path_or_buf' will be keyword-only.
  df.to_hdf(output_name, "tracks", format="table", mode="w")
The videos are analyzed. Time to assemble animals and track 'em... 
 Call 'create_video_with_all_detections' to check multi-animal detection quality before tracking.
If the tracking is not satisfactory for some videos, consider expanding the training set. You can use the function 'extract_outlier_frames' to extract a few representative outlier frames.
'DLC_dlcrnetms5_demoJul14shuffle0_20000'

Next, you compute the local, spatio-temporal grouping and track body part assemblies frame-by-frame:#
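Because we passed auto_track=True above, this grouping and stitching already ran under the hood. If you ever want to run the stages individually (e.g., to refine tracklets in between), a minimal sketch of the equivalent manual calls looks like this (arguments assumed to match the demo project):

# Sketch of the three-step pipeline that auto_track=True runs for you:
# 1) extract detections and part-affinity costs,
# 2) group detections into tracklets,
# 3) stitch tracklets into full tracks.
deeplabcut.analyze_videos(config_path, [video], shuffle=0,
                          videotype="mp4", auto_track=False)
deeplabcut.convert_detections2tracklets(config_path, [video], shuffle=0,
                                        videotype="mp4", track_method="ellipse")
deeplabcut.stitch_tracklets(config_path, [video], shuffle=0,
                            videotype="mp4", track_method="ellipse")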

Create a pretty video output:#

# Filter the predictions to remove small jitter, if desired:
deeplabcut.filterpredictions(config_path, [video], shuffle=0, videotype="mp4")
deeplabcut.create_labeled_video(
    config_path,
    [video],
    videotype="mp4",
    shuffle=0,
    color_by="individual",
    keypoints_only=False,
    draw_skeleton=True,
    filtered=True,
)
Filtering with median model /content/demo-me-2021-07-14/videos/videocompressed1.mp4
Saving filtered csv poses!
/usr/local/lib/python3.11/dist-packages/deeplabcut/post_processing/filtering.py:298: FutureWarning: Starting with pandas version 3.0 all arguments of to_hdf except for the argument 'path_or_buf' will be keyword-only.
  data.to_hdf(outdataname, "df_with_missing", format="table", mode="w")
Starting to process video: /content/demo-me-2021-07-14/videos/videocompressed1.mp4
Loading /content/demo-me-2021-07-14/videos/videocompressed1.mp4 and data.
Duration of video [s]: 77.67, recorded with 30.0 fps!
Overall # of frames: 2330 with cropped frame dimensions: 640 480
Generating frames and creating video.
/usr/local/lib/python3.11/dist-packages/deeplabcut/utils/make_labeled_video.py:140: FutureWarning: DataFrame.groupby with axis=1 is deprecated. Do `frame.T.groupby(...)` without axis instead.
  Dataframe.groupby(level="individuals", axis=1).size().values // 3
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2330/2330 [00:31<00:00, 73.04it/s]
[True]

Now, in the left panel, if you click the folder icon you will see the project folder "demo-me.."; click on it and go into "videos", where you can find the "…_id_labeled.mp4" video, which you can double-click to download and inspect!
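Alternatively, you can preview a labeled video inline in Colab. This helper is not part of the original demo; it simply base64-embeds the mp4 (the glob pattern for the filename is an assumption):

# Optional: preview a labeled video inline in Colab (not in the original demo).
import base64
import glob
import os
from IPython.display import HTML

labeled_videos = sorted(glob.glob(os.path.join(project_path, "videos", "*_labeled.mp4")))
with open(labeled_videos[0], "rb") as f:
    mp4 = base64.b64encode(f.read()).decode()
HTML(f'<video width=480 controls src="data:video/mp4;base64,{mp4}"></video>')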

Create Plots of your data:#

After running, you can look in "videos" -> "plot-poses" to check out the trajectories! (Sometimes you need to click the folder-refresh icon to see it.) Within the folder, for example, see plotmus1.png to view the body parts' pixel positions over time.

deeplabcut.plot_trajectories(config_path, [video], shuffle=0, videotype="mp4")
Loading  /content/demo-me-2021-07-14/videos/videocompressed1.mp4 and data.
Plots created! Please check the directory "plot-poses" within the video directory
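If you would rather inspect the coordinates yourself, DeepLabCut saves the tracked poses as a pandas DataFrame in an HDF5 file next to the video. A minimal sketch; the "*_el.h5" suffix for ellipse-stitched tracks is an assumption based on the default naming, hence the glob:

# Optional: load the tracked coordinates directly (not in the original demo).
import glob
import os
import pandas as pd

h5files = sorted(glob.glob(os.path.join(project_path, "videos", "*_el.h5")))
df = pd.read_hdf(h5files[0])
print(df.columns.names)  # typically ('scorer', 'individuals', 'bodyparts', 'coords')
df.head()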

Transformer for reID#

While the tracking here is already very good without the transformer, we want to demonstrate the workflow for you!

deeplabcut.transformer_reID(
    config_path,
    [video],
    shuffle=0,
    videotype="mp4",
    track_method="ellipse",
    n_triplets=100,
)
Using snapshot-20000 for model /content/demo-me-2021-07-14/dlc-models/iteration-0/demoJul14-trainset95shuffle0
/usr/local/lib/python3.11/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py:1694: UserWarning: `layer.apply` is deprecated and will be removed in a future version. Please use `layer.__call__` method instead.
  warnings.warn('`layer.apply` is deprecated and '
Activating extracting of PAFs
Starting to analyze %  /content/demo-me-2021-07-14/videos/videocompressed1.mp4
Loading  /content/demo-me-2021-07-14/videos/videocompressed1.mp4
Duration of video [s]:  77.67 , recorded with  30.0 fps!
Overall # of frames:  2330  found with (before cropping) frame dimensions:  640 480
Starting to extract posture
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2330/2330 [01:18<00:00, 29.78it/s]
If the tracking is not satisfactory for some videos, consider expanding the training set. You can use the function 'extract_outlier_frames' to extract a few representative outlier frames.
Epoch 10, train acc: 0.61
Epoch 10, test acc 0.45
Epoch 20, train acc: 0.74
Epoch 20, test acc 0.65
Epoch 30, train acc: 0.78
Epoch 30, test acc 0.55
Epoch 40, train acc: 0.76
Epoch 40, test acc 0.50
Epoch 50, train acc: 0.85
Epoch 50, test acc 0.55
Epoch 60, train acc: 0.84
Epoch 60, test acc 0.60
Epoch 70, train acc: 0.85
Epoch 70, test acc 0.55
Epoch 80, train acc: 0.79
Epoch 80, test acc 0.55
Epoch 90, train acc: 0.88
Epoch 90, test acc 0.55
Epoch 100, train acc: 0.84
Epoch 100, test acc 0.55
loading params
Processing...  /content/demo-me-2021-07-14/videos/videocompressed1.mp4
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4/4 [00:00<00:00, 483.21it/s]
/usr/local/lib/python3.11/dist-packages/deeplabcut/refine_training_dataset/stitch.py:934: FutureWarning: Starting with pandas version 3.0 all arguments of to_hdf except for the argument 'path_or_buf' will be keyword-only.
  df.to_hdf(output_name, "tracks", format="table", mode="w")
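To get a rough sense of how much the transformer re-assignment changed the identities, you can compare the ellipse-stitched and transformer-stitched tracks directly. This comparison is not part of the original demo, and the "*_el.h5"/"*_tr.h5" suffixes are assumptions based on DeepLabCut's default naming:

# Optional sketch: compare ellipse vs. transformer track assignments.
import glob
import os
import numpy as np
import pandas as pd

videos_dir = os.path.join(project_path, "videos")
df_el = pd.read_hdf(sorted(glob.glob(os.path.join(videos_dir, "*_el.h5")))[0])
df_tr = pd.read_hdf(sorted(glob.glob(os.path.join(videos_dir, "*_tr.h5")))[0])
common = df_el.columns.intersection(df_tr.columns)
# Fraction of entries where both methods agree (NaNs counted as agreement):
agree = np.isclose(df_el[common].to_numpy(), df_tr[common].to_numpy(), equal_nan=True)
print("agreement:", agree.mean())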

Now we can plot the trajectories and make another labeled video with the transformer-guided tracking:

deeplabcut.plot_trajectories(
    config_path,
    [video],
    shuffle=0,
    videotype="mp4",
    track_method="transformer",
)
Loading  /content/demo-me-2021-07-14/videos/videocompressed1.mp4 and data.
Plots created! Please check the directory "plot-poses" within the video directory
deeplabcut.create_labeled_video(
    config_path,
    [video],
    videotype="mp4",
    shuffle=0,
    color_by="individual",
    keypoints_only=False,
    draw_skeleton=True,
    track_method="transformer"
)
Starting to process video: /content/demo-me-2021-07-14/videos/videocompressed1.mp4
Loading /content/demo-me-2021-07-14/videos/videocompressed1.mp4 and data.
Duration of video [s]: 77.67, recorded with 30.0 fps!
Overall # of frames: 2330 with cropped frame dimensions: 640 480
Generating frames and creating video.
/usr/local/lib/python3.11/dist-packages/deeplabcut/utils/make_labeled_video.py:140: FutureWarning: DataFrame.groupby with axis=1 is deprecated. Do `frame.T.groupby(...)` without axis instead.
  Dataframe.groupby(level="individuals", axis=1).size().values // 3
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2330/2330 [00:31<00:00, 73.75it/s]
[True]