Open In Colab

DeepLabCut SuperAnimal models#


http://modelzoo.deeplabcut.org

You can use this notebook to analyze videos with pretrained networks from our model zoo - NO local installation of DeepLabCut is needed!

  • What you need: a video of your favorite dog, cat, human, etc: check the list of currently available models here: http://modelzoo.deeplabcut.org

  • What to do: in the top right corner, click “CONNECT”. Then just hit run (play icon) on each cell below and follow the instructions!

  • Note, if the performance is less than you would like: first check the labeled-video parameters (e.g., “pcutoff”, which sets the confidence threshold for video plotting) - see the end of this notebook.

  • You can also use the model in your own projects locally. Please be sure to cite the papers for the model, i.e., Ye et al. 2024 🎉

Let’s get going: install DeepLabCut into COLAB:#

Also, be sure you are connected to a GPU: go to menu, click Runtime > Change Runtime Type > select “GPU”

!pip install --pre deeplabcut

PLEASE, click “restart runtime” from the output above before proceeding!#

from pathlib import Path

import deeplabcut

Please select a video you want to run SuperAnimal-X on:#

from google.colab import files

uploaded = files.upload()
for filepath, content in uploaded.items():
  print(f'User uploaded file "{filepath}" with length {len(content)} bytes')

# If several files were uploaded, this keeps the last one from the loop above
video_path = Path(filepath).resolve()

# If this cell fails (e.g., when using Safari in place of Google Chrome),
# manually upload your video via the Files menu to the left
# and define `video_path` yourself with right click > copy path on the video.
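If the upload widget fails, the manual fallback looks like the sketch below (the path is a placeholder; paste the path you copied from the Files panel instead):

```python
from pathlib import Path

# Placeholder path - replace with the path copied from the Files panel
video_path = Path("/content/my_video.mp4").resolve()
```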

Next, select the model you want to use: Quadruped or TopViewMouse#

  • See http://modelzoo.deeplabcut.org/ for more details on these models

  • The pcutoff is for visualization only: only keypoints with a confidence above the value you set are shown. 0 is low confidence, 1 is perfect confidence of the model.

superanimal_name = "superanimal_quadruped" #@param ["superanimal_topviewmouse", "superanimal_quadruped"]
model_name = "hrnet_w32" #@param ["hrnet_w32", "resnet_50"]
detector_name = "fasterrcnn_resnet50_fpn_v2" #@param ["fasterrcnn_resnet50_fpn_v2", "fasterrcnn_mobilenet_v3_large_fpn"]
pcutoff = 0.15 #@param {type:"slider", min:0, max:1, step:0.05}
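As a rough illustration of what pcutoff does (the confidence values below are made up, not real model output), it simply thresholds which keypoints get drawn in the labeled video:

```python
# Hypothetical per-keypoint confidences for one frame (illustrative only)
confidences = {"nose": 0.92, "tail_base": 0.40, "left_ear": 0.10}

pcutoff = 0.15
# Keypoints at or above pcutoff are plotted; the rest are hidden
visible = {name: c for name, c in confidences.items() if c >= pcutoff}
print(visible)  # "left_ear" falls below the threshold and is not shown
```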

Okay, let’s go! 🐭🦓🐻#

videotype = video_path.suffix
scale_list = []

deeplabcut.video_inference_superanimal(
    [video_path],
    superanimal_name,
    model_name=model_name,
    detector_name=detector_name,
    videotype=videotype,
    video_adapt=True,
    scale_list=scale_list,
    pcutoff=pcutoff,
)

Let’s view the video in Colab:#

  • Otherwise, you can download and look at the video from the left side of your screen! Since video adaptation is used here, it will end with _labeled_after_adapt.mp4

  • If your data doesn’t work as well as you’d like, consider fine-tuning our model on your data, changing the pcutoff, or changing the scale_list (pick values smaller and larger than your video’s frame size). See our repo for more details.
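One way to pick a scale_list is to bracket your video's frame height. The sketch below is illustrative only (the height and factors are assumptions, not a recommendation); substitute your actual frame size:

```python
# Assumed frame height - replace with your video's actual height in pixels
frame_height = 480

# Pick a few sizes below and above the input size, rounded down to multiples of 32
factors = (0.75, 1.0, 1.25)
scale_list = sorted({(int(frame_height * f) // 32) * 32 for f in factors})
print(scale_list)  # [352, 480, 576] for a 480-pixel-tall video
```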

from base64 import b64encode
from IPython.display import HTML
import glob

# Get the parent directory and stem (filename without extension)
directory = video_path.parent
basename = video_path.stem

# Build the pattern
# This uses '*' to allow for any characters between the fixed parts
pattern = f"{basename}*{superanimal_name}*{detector_name}*{model_name}*_labeled_after_adapt.mp4"

# Search for matching files
matches = list(directory.glob(pattern))

# Choose the first match if it exists
labeled_video_path = matches[0] if matches else None
if labeled_video_path is None:
    raise FileNotFoundError(f"No labeled video matching {pattern} in {directory}")

with open(labeled_video_path, "rb") as f:
    view_video = f.read()

data_url = "data:video/mp4;base64," + b64encode(view_video).decode()
HTML("""
<video width=600 controls>
      <source src="%s" type="video/mp4">
</video>
""" % data_url)