DeepLabCut for your multi-animal projects!#
For multi-animal projects, this notebook illustrates how to use the cloud-based GPU to:
create a multi-animal training set
train a network
evaluate a network
analyze novel videos
assemble animals and tracklets
create quality check plots!
This notebook assumes you already have a DLC project folder with labeled data and you uploaded it to your own Google Drive.#
This notebook demonstrates the necessary steps to use DeepLabCut for your own project.
This shows the simplest code to do so, but many of the functions have additional features, so please check out the docs on GitHub. We also recommend checking out our preprint, which covers the science of maDLC:
Lauer et al. 2021: https://www.biorxiv.org/content/10.1101/2021.04.30.442096v1
First, go to “Runtime” -> “Change runtime type” -> select “Python 3”, and then select “GPU”#
As the Colab environments were updated to CUDA 12.x and Python 3.11, we need to install DeepLabCut and TensorFlow in a specific way so that TensorFlow can connect to the GPU.
# this will take a couple of minutes to install all the dependencies!
!pip install --pre deeplabcut
(Be sure to click “RESTART RUNTIME” if it is displayed above before moving on!) You will see this button at the output of the cell above.
import deeplabcut
Link your Google Drive (with your labeled data):#
This code assumes you locally installed DeepLabCut, created a project, and extracted and labeled frames. Be sure to “check labels” to confirm you are happy with your data, as these frames are the only data used to train your network. 💪 You can find all the docs to do this here: deeplabcut.github.io/DeepLabCut
Next, place your DLC project folder into your Google Drive, i.e., copy the folder named “Project-YourName-TheDate” into Google Drive.
Then, click run on the cell below to link this notebook to your Google Drive:
# Now, let's link to your GoogleDrive. Run this cell and follow the authorization instructions:
# (We recommend putting a copy of the github repo in your google drive if you are using the demo "examples")
from google.colab import drive
drive.mount("/content/drive")
Next, edit the few items below, and click run:#
YOU WILL NEED TO EDIT THE PROJECT PATH in the config.yaml file TO BE SET TO YOUR GOOGLE DRIVE LINK! Typically, this will be: /content/drive/My Drive/yourProjectFolderName (a sketch for editing it programmatically follows the cell below).
# PLEASE EDIT THIS:
project_folder_name = "MontBlanc-Daniel-2019-12-16"
video_type = "mp4"  # mp4, MOV, or avi, whatever you uploaded!
# No need to edit this, we are going to assume you put videos you want to analyze
# in the "videos" folder, but if this is NOT true, edit below:
videofile_path = [f"/content/drive/My Drive/{project_folder_name}/videos/"]
print(videofile_path)
# The prediction files and labeled videos will be saved in this `labeled-videos` folder
# in your project folder; if you want them elsewhere, you can edit this;
# if you want the output files in the same folder as the videos, set this to an empty string.
destfolder = f"/content/drive/My Drive/{project_folder_name}/labeled-videos"
# No need to edit this; it is built from the project_folder_name you set above:
path_config_file = f"/content/drive/My Drive/{project_folder_name}/config.yaml"
print(path_config_file)
# This creates a path variable that links to your Google Drive project
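The project_path inside config.yaml must also point to the copy of the project on your Google Drive. If you have not already edited it by hand, here is a minimal sketch that updates it programmatically; it assumes the standard DLC config.yaml layout and uses plain pyyaml rather than a DeepLabCut function (note that this rewrites the file without preserving its comments):
# Minimal sketch (assumption: standard DLC config.yaml layout; plain pyyaml, not a DLC API).
# Note: safe_dump rewrites config.yaml without preserving its comments.
import yaml

with open(path_config_file) as f:
    cfg = yaml.safe_load(f)

cfg["project_path"] = f"/content/drive/My Drive/{project_folder_name}"

with open(path_config_file, "w") as f:
    yaml.safe_dump(cfg, f)

print("project_path is now:", cfg["project_path"])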
Create a multi-animal training dataset:#
More info can be found in the docs.
Please check the text below, edit if needed, and then click run (this can take some time):
# OPTIONAL LEARNING: did you know you can check what each function does by running it with a trailing `?`
deeplabcut.create_multianimaltraining_dataset?
# ATTENTION:
# Which shuffle do you want to create and train?
shuffle = 1 # Edit if needed; 1 is the default.
deeplabcut.create_multianimaltraining_dataset(
path_config_file,
Shuffles=[shuffle],
net_type="dlcrnet_ms5",
engine=deeplabcut.Engine.PYTORCH,
)
Start training:#
This function trains the network for a specific shuffle of the training dataset. More info can be found in the docs.
# Let's also change the display and save_epochs just in case Colab takes away
# the GPU... If that happens, you can reload from a saved point using the
# `snapshot_path` argument to `deeplabcut.train_network`:
# deeplabcut.train_network(..., snapshot_path="/content/.../snapshot-050.pt")
# Typically, you want to train to ~200 epochs. We set the batch size to 8 to
# utilize the GPU's capabilities.
# More info, and other settings you can adjust, can be found here:
# https://deeplabcut.github.io/DeepLabCut/docs/standardDeepLabCut_UserGuide.html#g-train-the-network
deeplabcut.train_network(
path_config_file,
shuffle=shuffle,
save_epochs=5,
epochs=200,
batch_size=8,
)
# This will run until you stop it (CTRL+C), hit the "STOP" icon, or it reaches the final epoch.
Note that when you hit “STOP” you will get a KeyboardInterrupt “error”! No worries! :)
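If Colab does take the GPU away mid-training, you can resume from the most recent saved snapshot instead of starting over. Here is a minimal sketch; it assumes the PyTorch engine writes snapshot-*.pt files somewhere under your project folder (adjust the glob pattern if your layout differs, e.g. if detector snapshots are also saved):
# Minimal sketch to resume training from the newest saved snapshot.
# Assumption: snapshots are written as snapshot-*.pt files under the project folder.
from pathlib import Path

project_path = f"/content/drive/My Drive/{project_folder_name}"
snapshots = sorted(Path(project_path).rglob("snapshot-*.pt"), key=lambda p: p.stat().st_mtime)

if snapshots:
    latest_snapshot = snapshots[-1]
    print("Resuming from:", latest_snapshot)
    deeplabcut.train_network(
        path_config_file,
        shuffle=shuffle,
        save_epochs=5,
        epochs=200,
        batch_size=8,
        snapshot_path=str(latest_snapshot),
    )
else:
    print("No snapshots found yet; run the training cell above first.")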
Start evaluating:#
First, we evaluate the pose estimation performance.
This function evaluates a trained model for a specific shuffle (or shuffles) at a particular training state (snapshot), or over all states, on the dataset (images) and stores the results as .h5 and .csv files in a subdirectory under evaluation-results-pytorch.
If the scoremaps do not look accurate, don’t proceed to tracklet assembly; please consider (1) adding more data, (2) adding more bodyparts!
More info can be found in the docs
# Let's evaluate first:
deeplabcut.evaluate_network(path_config_file, Shuffles=[shuffle], plotting=True)
# plot a few scoremaps:
deeplabcut.extract_save_all_maps(path_config_file, shuffle=shuffle, Indices=[0, 1, 2, 3])
If these images, numbers, and maps do not look good, do not proceed. Increase the diversity and number of frames you label, then re-create the training dataset and re-train!
Start Analyzing videos:#
This function analyzes the new video(s). You can choose the best model from the evaluation results and specify the corresponding snapshot index in the snapshotindex variable of the config.yaml file; otherwise, by default the most recent snapshot is used to analyze the video.
The results are stored in a pickle file in the same directory as the video (or in destfolder, if you set one).
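If you want to use a specific snapshot rather than the most recent one, you can set snapshotindex before running the cell below. A minimal sketch, assuming the auxiliaryfunctions.edit_config helper behaves as in current DeepLabCut releases (it writes the given key/value pairs back into config.yaml):
# OPTIONAL sketch: choose which snapshot is used for analysis.
# Assumption: auxiliaryfunctions.edit_config is available as in current DLC releases.
from deeplabcut.utils import auxiliaryfunctions

# -1 (the default) means "use the most recent snapshot"; set 0, 1, ... to pick an earlier one.
auxiliaryfunctions.edit_config(path_config_file, {"snapshotindex": -1})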
print("Start Analyzing my video(s)!")
# EDIT OPTION: which video(s) do you want to analyze? You can pass a path or a folder:
# currently, if you run "as is" it assumes you have a video in the DLC project video folder!
deeplabcut.analyze_videos(
path_config_file,
videofile_path,
shuffle=shuffle,
videotype=video_type,
auto_track=False,
destfolder=destfolder,
)
Optional: Now you have the option to check the raw detections before animals are tracked. To do so, pass a video path:
##### PROTIP: #####
## look at the output video; if the pose estimation (i.e. key points)
## don't look good, don't proceed with tracking - add more data to your training set and re-train!
# EDIT: let's check a specific video (PLEASE EDIT VIDEO PATH):
specific_videofile = "/content/drive/MyDrive/DeepLabCut_maDLC_DemoData/MontBlanc-Daniel-2019-12-16/videos/short.mov"
# Don't edit:
deeplabcut.create_video_with_all_detections(
path_config_file, [specific_videofile], shuffle=shuffle, destfolder=destfolder,
)
If the resulting video (ends in full.mp4) is not good, we highly recommend adding more data and training again. See the docs for more details.
Next, we will assemble animals using our data-driven optimal graph method:#
During video analysis, animals are assembled using the optimal graph, which matches the “data-driven” method from our paper (Lauer et al. 2021).
The optimal graph is computed when evaluate_network is run - so make sure you don’t skip that step!
Note: you can set the number of animals you expect to see, so check, edit, then click run:
# Check and edit:
num_animals = 4  # How many animals do you expect to find?
track_type = "ellipse"  # box, skeleton, or ellipse
# -- ellipse is recommended, unless you have a single-point MA project, then use "box"!
# Optional:
# Imagine you tracked a point that is not useful for assembly,
# like a tail tip that is far from the body; consider dropping it for this step (it's still used later)!
# To drop it, uncomment the next TWO lines and add your part(s):
# bodypart = "Tail_end"
# deeplabcut.convert_detections2tracklets(path_config_file, videofile_path, videotype=video_type, shuffle=shuffle, track_method=track_type, overwrite=True, ignore_bodyparts=[bodypart], destfolder=destfolder)
# OR don't drop, just click RUN:
deeplabcut.convert_detections2tracklets(
path_config_file,
videofile_path,
videotype=video_type,
shuffle=shuffle,
track_method=track_type,
destfolder=destfolder,
overwrite=True,
)
deeplabcut.stitch_tracklets(
path_config_file,
videofile_path,
shuffle=shuffle,
track_method=track_type,
n_tracks=num_animals,
destfolder=destfolder,
)
Now let’s filter the data to remove any small jitter:
deeplabcut.filterpredictions(
path_config_file,
videofile_path,
shuffle=shuffle,
videotype=video_type,
track_method=track_type,
destfolder=destfolder,
)
Create plots of your trajectories:#
deeplabcut.plot_trajectories(
path_config_file,
videofile_path,
videotype=video_type,
shuffle=shuffle,
track_method=track_type,
destfolder=destfolder,
)
Now you can look at the plot-poses folder and check “plot-likelihood.png”. You might want to change the “pcutoff” in the config.yaml file so that only high-confidence points are plotted in the video, i.e., ~0.8 or 0.9. The current default is 0.4.
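For example, here is a short sketch that raises the cutoff; it assumes the auxiliaryfunctions.edit_config helper behaves as in current DeepLabCut releases. Re-run the plotting (and labeled-video) cells afterwards to see the effect:
# OPTIONAL sketch: raise pcutoff so only high-confidence points are plotted.
# Assumption: edit_config is available as in current DLC releases.
from deeplabcut.utils import auxiliaryfunctions

auxiliaryfunctions.edit_config(path_config_file, {"pcutoff": 0.8})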
Create labeled video:#
This function is for visualization purposes and can be used to create a video in .mp4 format with the labels predicted by the network. The video is saved in the destfolder set above (or next to the original video if no destfolder is given).
deeplabcut.create_labeled_video(
path_config_file,
videofile_path,
shuffle=shuffle,
color_by="individual",
videotype=video_type,
save_frames=False,
filtered=True,
track_method=track_type,
destfolder=destfolder,
)