
DeepLabCut on Single Mouse Data Demo#

Some useful links:

  • DeepLabCut's GitHub (code & demo data): https://github.com/DeepLabCut/DeepLabCut

  • DeepLabCut user guide: https://deeplabcut.github.io/DeepLabCut/docs/standardDeepLabCut_UserGuide.html

Demo supporting: Nath*, Mathis* et al. Using DeepLabCut for markerless 3D pose estimation during behavior across species. Nature Protocols, 2019.

This notebook demonstrates the necessary steps to use DeepLabCut on our demo data. We provide a subset of the mouse data from Mathis et al., 2018, Nature Neuroscience.

This demo notebook shows the simplest code to train and evaluate your model, but many of the functions have additional features, so please check out the overview & the protocol paper!
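Once DeepLabCut is installed (see below), you can list any function's additional options with Python's built-in help:

# Print the full docstring (all arguments and defaults) for any DLC function:
import deeplabcut
help(deeplabcut.train_network)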

This notebook illustrates how to use the cloud to:

  • load demo data

  • create a training set

  • train a network

  • evaluate a network

  • analyze a novel video

Installation#

First, go to “Runtime” -> “Change runtime type” -> select “Python 3”, and then select “GPU”#

# Clone the entire deeplabcut repo so we can use the demo data:
!git clone -l -s https://github.com/DeepLabCut/DeepLabCut.git cloned-DLC-repo
%cd cloned-DLC-repo
!ls
%cd /content/cloned-DLC-repo/examples/openfield-Pranav-2018-10-30
!ls
# Install the latest DeepLabCut version (this will take a few minutes to install all the dependencies!)
%cd /content/cloned-DLC-repo/
%pip install "."

PLEASE click “Restart runtime” in the output above before proceeding!#

import deeplabcut
# Create a path variable that links to the config file:
path_config_file = '/content/cloned-DLC-repo/examples/openfield-Pranav-2018-10-30/config.yaml'

# Loading example data set:
deeplabcut.load_demo_data(path_config_file)

# Automatically update some hyperparameters for training: here, rotation
# augmentation to +/- 180 degrees, which can help optimize performance.
# See the Primer: Mathis et al., Neuron 2020.
from deeplabcut.core.config import read_config_as_dict
import deeplabcut.pose_estimation_pytorch as dlc_torch

loader = dlc_torch.DLCLoader(
    config=path_config_file,
    trainset_index=0,
    shuffle=1,
)

# Get the path to the PyTorch model configuration
pytorch_config_path = loader.model_folder / "pytorch_config.yaml"

# Read the config and set the rotation augmentation
model_cfg = read_config_as_dict(pytorch_config_path)
model_cfg["data"]["train"]["affine"]["rotation"] = 180

# Save the modified config
dlc_torch.config.write_config(pytorch_config_path, model_cfg)
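To double-check that the edit took effect, you can re-read the saved file (a quick sanity check using the same helper imported above):

# Re-read the saved config and confirm the rotation augmentation was updated
check_cfg = read_config_as_dict(pytorch_config_path)
assert check_cfg["data"]["train"]["affine"]["rotation"] == 180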

Start training:#

This function trains the network for a specific shuffle of the training dataset.

# Let's also change the display and save_epochs just in case Colab takes away
# the GPU... If that happens, you can reload from a saved point using the
# `snapshot_path` argument to `deeplabcut.train_network`:
#   deeplabcut.train_network(..., snapshot_path="/content/.../snapshot-050.pt")

# Typically, you want to train to ~200 epochs. We set the batch size to 8 to
# utilize the GPU's capabilities.

# For more info, and additional settings you can adjust, see:
#   https://deeplabcut.github.io/DeepLabCut/docs/standardDeepLabCut_UserGuide.html#g-train-the-network

deeplabcut.train_network(
    path_config_file,
    shuffle=1,
    save_epochs=5,
    epochs=200,
    batch_size=8,
)

# This will run until it reaches the final epoch, or until you stop it
# (CTRL+C or the "STOP" icon).

We recommend you run this for ~100 epochs, just as a demo; this should take around 15 minutes. Note that when you hit “STOP” you will get a KeyboardInterrupt “error”! No worries! :)

A new snapshot is saved every save_epochs epochs, so once you hit 80 epochs, your latest snapshot in /content/cloned-DLC-repo/examples/openfield-Pranav-2018-10-30/dlc-models-pytorch/iteration-0/openfieldOct30-trainset95shuffle1/train should be snapshot-80.pt. The best snapshot evaluated during training is also saved, named snapshot-best-XX.pt, where XX is the number of epochs for which the model had been trained.
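If the runtime does get recycled, here is a minimal sketch of resuming from a saved snapshot (the exact snapshot filename depends on when training stopped, so check the train folder first):

# Resume training from a saved snapshot (sketch; adjust the snapshot filename
# to whatever actually exists in your train folder):
train_dir = (
    "/content/cloned-DLC-repo/examples/openfield-Pranav-2018-10-30/"
    "dlc-models-pytorch/iteration-0/openfieldOct30-trainset95shuffle1/train"
)
deeplabcut.train_network(
    path_config_file,
    shuffle=1,
    save_epochs=5,
    epochs=200,
    batch_size=8,
    snapshot_path=f"{train_dir}/snapshot-80.pt",
)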

Start evaluating:#

This function evaluates a trained model for a specific shuffle (or shuffles) at a particular training state, or at all saved states, on the dataset of labeled images, and stores the results as a .csv file in a subdirectory under evaluation-results-pytorch.

deeplabcut.evaluate_network(path_config_file, plotting=True)

# Here you want to see a low pixel error! Of course, it can only be as
# good as the labeler, so be sure your labels are good!
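If you'd rather inspect the numbers programmatically, here is a sketch for locating and printing the results table (the exact CSV filename encodes the shuffle and snapshot, so we search for it rather than hard-coding it):

# Find and display the evaluation results CSV(s):
import glob
import pandas as pd

csv_files = glob.glob(
    "/content/cloned-DLC-repo/examples/openfield-Pranav-2018-10-30/"
    "evaluation-results-pytorch/**/*.csv",
    recursive=True,
)
print(pd.read_csv(csv_files[0]))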

Check the images:

You can go look at the images in the newly created "evaluation-results-pytorch" folder. At around 100 epochs, the error is ~3 pixels (though this can vary depending on how your demo data was split for training).

Start Analyzing videos:#

This function analyzes the new video. You can choose the best model from the evaluation results and specify its snapshot index via the snapshotindex variable in the config.yaml file; otherwise, by default, the most recent snapshot is used to analyze the video.
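You can edit snapshotindex in config.yaml by hand, or set it programmatically; here is a sketch using DeepLabCut's edit_config helper (-1 selects the most recent snapshot):

# Pin which snapshot analyze_videos should use (-1 = most recent):
from deeplabcut.utils.auxiliaryfunctions import edit_config
edit_config(path_config_file, {"snapshotindex": -1})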

The results are stored in an HDF5 (.h5) file in the same directory where the video resides.

On the demo data, this should take around 90 seconds! (The demo frames are 640x480, which should run at around 25 FPS on the Google-provided T4 GPU.)

# Enter the list of videos to analyze.
videofile_path = ["/content/cloned-DLC-repo/examples/openfield-Pranav-2018-10-30/videos/m3v1mp4.mp4"]
deeplabcut.analyze_videos(path_config_file, videofile_path, videotype=".mp4")
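To peek at the predictions programmatically, here is a minimal sketch (the output filename embeds the scorer name, so we glob for it rather than hard-coding it):

# Load the predictions into a pandas DataFrame:
import glob
import pandas as pd

h5_files = glob.glob(
    "/content/cloned-DLC-repo/examples/openfield-Pranav-2018-10-30/videos/*.h5"
)
df = pd.read_hdf(h5_files[0])
print(df.head())  # columns are a MultiIndex: (scorer, bodypart, x/y/likelihood)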

Create labeled video:#

This function is for visualization purposes and can be used to create a video in .mp4 format with the labels predicted by the network. This video is saved in the same directory where the original video resides. This should run at around 215 FPS on the demo video!

deeplabcut.create_labeled_video(path_config_file, videofile_path)
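create_labeled_video also accepts optional arguments; for example (a hedged sketch, see the user guide linked above for the full list), trailpoints draws a short motion trail behind each body part:

# Optional: redraw the labeled video with a 5-frame motion trail per bodypart
deeplabcut.create_labeled_video(path_config_file, videofile_path, trailpoints=5)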

Plot the trajectories of the analyzed videos:#

This function plots the trajectories of all the body parts across the entire video. Each body part is identified by a unique color.

deeplabcut.plot_trajectories(path_config_file, videofile_path)
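The resulting figures are saved alongside the analyzed video (by default in a plot-poses subfolder).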