Open In Colab

DeepLabCut Model Zoo user-contributed models#

🚨 WARNING – This notebook uses the older (2020-2023) user-contributed models. Please see the SuperAnimal notebook if you want to use our SuperAnimal foundation models for quadrupeds or mice.


http://modelzoo.deeplabcut.org

You can use this notebook to analyze videos with pretrained networks from our model zoo - NO local installation of DeepLabCut is needed!

  • What you need: a video of your favorite dog, cat, human, etc. Check the list of currently available models here: http://modelzoo.deeplabcut.org

  • What to do: in the top right corner, click “CONNECT”. Then just hit run (the play icon) on each cell below and follow the instructions!

Please consider giving back and labeling a little data to help make each network even better!#

We have a WebApp, so no need to install anything, just a few clicks! We’d really appreciate your help!

https://contrib.deeplabcut.org/

  • Note: if the performance is not as good as you would like, first check the labeled-video parameters (e.g., “pcutoff” in the config.yaml file, which sets the plotting threshold) - see the end of this notebook. You can also use the model in your own projects locally. Please be sure to cite the papers for the model, and http://modelzoo.deeplabcut.org (paper forthcoming!)

Let’s get going: install DeepLabCut into COLAB:#

Also, be sure you are connected to a GPU: go to the menu, click Runtime > Change runtime type > select “GPU”.
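
If you want to double-check that a GPU is attached before installing anything, you can run the cell below (just a convenience check; nvidia-smi is preinstalled on Colab GPU runtimes):

# This should print a table listing a GPU (e.g., a Tesla T4).
# If it errors or shows no devices, re-check the runtime type above.
!nvidia-smi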

As the COLAB environments were updated to CUDA 12.X and Python 3.11, we need to install DeepLabCut and TensorFlow in a specific way so that TensorFlow can connect to the GPU.

# Install TensorFlow, tensorpack and tf_slim versions compatible with DeepLabCut
!pip install "tensorflow==2.12.1" "tensorpack>=0.11" "tf_slim>=1.1.0"
# Downgrade PyTorch to a version using CUDA 11.8 and cudnn 8
# This will also install the required CUDA libraries, for both PyTorch and TensorFlow
!pip install torch==2.3.1 torchvision --index-url https://download.pytorch.org/whl/cu118
# Install the latest version of DeepLabCut
!pip install "git+https://github.com/DeepLabCut/DeepLabCut.git#egg=deeplabcut[modelzoo]"
# As described in https://www.tensorflow.org/install/pip#step-by-step_instructions, 
# create symbolic links to NVIDIA shared libraries:
!ln -svf /usr/local/lib/python3.11/dist-packages/nvidia/*/lib/*.so* /usr/local/lib/python3.11/dist-packages/tensorflow

Important - Restart the Runtime for the updated packages to be imported!#

PLEASE, click “restart runtime” from the output above before proceeding!
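
If the restart button does not appear in the output, a common Colab workaround (not part of the original workflow; it simply kills the kernel so Colab restarts it) is:

import os

# Killing the kernel process forces Colab to restart the runtime.
# You will briefly see a "session crashed" message - that is expected here.
os.kill(os.getpid(), 9)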

Now let’s set the backend & import the DeepLabCut package#

(if colab is buggy/throws an error, just rerun this cell):#

import os
import deeplabcut
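
Optionally, verify that TensorFlow can now see the GPU (a quick sanity check; not required for the rest of the notebook):

import tensorflow as tf

# An empty list means TensorFlow is not connected to the GPU;
# in that case, re-run the install cell above and restart the runtime again.
print(tf.config.list_physical_devices("GPU"))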

Next, run the cell below to upload your video file from your computer:#

from google.colab import files

uploaded = files.upload()
for filepath, content in uploaded.items():
  print(f'User uploaded file "{filepath}" with length {len(content)} bytes')
video_path = os.path.abspath(filepath)

# If this cell fails (e.g., when using Safari in place of Google Chrome),
# manually upload your video via the Files menu to the left
# and define `video_path` yourself with right click > copy path on the video.
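
If you do go the manual route, the assignment would look like this (the filename below is only a placeholder - replace it with the path of the file you actually uploaded):

# Only run this if the upload cell above failed.
# Hypothetical example path - replace "my_video.mp4" with your uploaded file's name
# (right click the file in the Files panel > "Copy path").
video_path = "/content/my_video.mp4"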

Select your model from the dropdown menu, then below (optionally) input the name you want for the project:#

import ipywidgets as widgets
from IPython.display import display

model_options = deeplabcut.create_project.modelzoo.Modeloptions
model_selection = widgets.Dropdown(
    options=model_options,
    value=model_options[0],
    description="Choose a DLC ModelZoo model!",
    disabled=False
)
display(model_selection)
project_name = 'myDLC_modelZoo'
your_name = 'teamDLC'
model2use = model_selection.value
videotype = os.path.splitext(video_path)[-1].lstrip('.')  # e.g. mp4, MOV, or avi - derived from whatever you uploaded!
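
If the dropdown widget does not render in your browser, you can also list the available models and set model2use by hand (the commented-out name is just an example; pick any name from the printed list):

# Fallback if the widget does not display: print every available ModelZoo model name...
for available_model in model_options:
    print(available_model)

# ...and set your choice manually, e.g.:
# model2use = "full_dog"  # example only - use a name from the list above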

Attention on this step!#

  • Please note that for optimal performance, your video frames should be around 300-600 pixels on the longest edge. If you have a larger video (e.g., straight from an iPhone), please downsize it first!

  • Thus, if you’re using an iPhone or similar, you’ll need to downsample the video first by running the downsampling cell below.
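
Not sure whether your video needs downsampling? You can check the frame size first (a small optional sketch using OpenCV, which is already installed alongside DeepLabCut):

import cv2

# Read the frame dimensions of the uploaded video.
cap = cv2.VideoCapture(video_path)
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
cap.release()
print(f"Frame size: {frame_width} x {frame_height} pixels")
# If the longer edge is well above ~600 pixels, run the downsampling cell below.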

(no need to edit it unless you want to change the size)

video_path = deeplabcut.DownSampleVideo(video_path, width=300)
print(video_path)

Lastly, run the cell below to create a pretrained project, analyze your video with your selected pretrained network, plot trajectories, and create a labeled video!:#

config_path, train_config_path = deeplabcut.create_pretrained_project(
    project_name,
    your_name,
    [video_path],
    videotype=videotype,
    model=model2use,
    analyzevideo=True,
    createlabeledvideo=True,
    copy_videos=True, #must leave copy_videos=True
    engine=deeplabcut.Engine.TF,
)
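
When this cell finishes, the pose predictions and the labeled video are written into the project’s videos folder; here is an optional way to list what was produced:

import glob

# The project folder sits next to the config.yaml created above.
project_path = os.path.dirname(config_path)

# List everything DeepLabCut wrote into the project's videos folder
# (the copied video, the prediction files, and the labeled video).
for output_file in sorted(glob.glob(os.path.join(project_path, "videos", "*"))):
    print(output_file)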

Now you can move this project out of Colab (e.g., download it to your Google Drive) and use it like a standard DeepLabCut project!
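
One way to do that from inside Colab is to mount your Google Drive and copy the whole project folder over (a sketch - the destination folder is just an example, change it as you like):

import shutil
from google.colab import drive

# Mount your Google Drive (you will be asked to authorize access).
drive.mount('/content/drive')

# Copy the entire pretrained project into your Drive.
project_path = os.path.dirname(config_path)
destination = os.path.join('/content/drive/MyDrive', os.path.basename(project_path))
shutil.copytree(project_path, destination, dirs_exist_ok=True)
print(f"Copied project to {destination}")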

You can analyze more videos, extract outlier frames, refine them, and/or add new keypoints + label new frames, and retrain if desired. We hope this gives you a good launching point for your work!
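
For reference, the standard refinement loop looks roughly like this (to be run locally in your own environment, since refining labels opens a GUI; the video path is just a placeholder):

# Analyze additional videos with the (pre)trained network:
deeplabcut.analyze_videos(config_path, ['/path/to/new_video.mp4'])

# Pull out frames where the network was least confident:
deeplabcut.extract_outlier_frames(config_path, ['/path/to/new_video.mp4'])

# Correct those frames in the labeling GUI (locally, not in Colab):
deeplabcut.refine_labels(config_path)

# Merge the corrections into the training data and retrain:
deeplabcut.merge_datasets(config_path)
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)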

Happy DeepLabCutting! Welcome to the Zoo :)#

More advanced options:#

  • If you would now like to customize the video/plots - e.g., the color, dot size, or the confidence threshold for a point to be plotted (pcutoff) - simply update the “config.yaml” file by editing the values in the cell below:

# Updating the plotting within the config.yaml file (without opening it ;):
edits = {
    'dotsize': 7,  # size of the dots!
    'colormap': 'spring',  # any matplotlib colormap!
    'pcutoff': 0.5,  # the higher the more conservative the plotting!
}
deeplabcut.auxiliaryfunctions.edit_config(config_path, edits)
# re-create the labeled video (first delete the existing labeled video from the project's 'videos' folder - see the file browser to the LEFT!):
project_path = os.path.dirname(config_path)
full_video_path = os.path.join(
    project_path,
    'videos',
    os.path.basename(video_path),
)

#filter predictions (should already be done above ;):
deeplabcut.filterpredictions(config_path, [full_video_path], videotype=videotype)

#re-create the video with your edits!
deeplabcut.create_labeled_video(config_path, [full_video_path], videotype=videotype, filtered=True)
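
If you’d like to preview the re-created labeled video directly in the notebook, here is a minimal sketch (it assumes the default “*labeled.mp4” naming used by create_labeled_video and embeds the most recent such file):

import glob
import base64
from IPython.display import HTML, display

# Find the most recently written labeled video in the project's videos folder.
labeled_video = sorted(
    glob.glob(os.path.join(project_path, 'videos', '*labeled.mp4')),
    key=os.path.getmtime,
)[-1]

# Embed it as an HTML5 video (fine for short clips; large files may be slow to encode).
data_url = 'data:video/mp4;base64,' + base64.b64encode(open(labeled_video, 'rb').read()).decode()
display(HTML(f'<video width="400" controls><source src="{data_url}" type="video/mp4"></video>'))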