DeepLabCut User Guide (for single animal projects)#
This document covers single/standard DeepLabCut use. If you have a complicated multi-animal scenario (i.e., animals that look the same), then please see our maDLC user guide.
To get started, you can use the GUI, or the terminal. See below.
DeepLabCut Project Manager GUI (recommended for beginners)#
GUI:
To begin, open the Anaconda Prompt and right-click to “run as administrator” (Windows), or simply launch “Terminal” (unix/MacOS) on your computer. We assume you have DeepLabCut installed (if not, see the Install docs!). Next, activate your conda environment (e.g., conda activate DEEPLABCUT). Then, simply run python -m deeplabcut. The functions below are available to you in an easy-to-use graphical user interface. While most functionality is available in the GUI, advanced users might want the additional flexibility that the command line interface offers. Read more below.
Hint
🚨 If you use Windows, please always open the terminal with administrator privileges! Right click, and “run as administrator”.
As a reminder, the core functions are described in our Nature Protocols paper (published at the time of version 2.0.6). Additional functions and features are continually added to the package. Thus, we recommend you read over the protocol and then look at the following documentation and the docstrings. Thanks for using DeepLabCut!
DeepLabCut in the Terminal/Command line interface:#
To begin, open the Anaconda Prompt and right-click to “run as administrator” (Windows), or simply launch “Terminal” (unix/MacOS) on your computer. We assume you have DeepLabCut installed (if not, see the Install docs!). Next, activate your conda environment (e.g., conda activate DEEPLABCUT), then type ipython, and then type import deeplabcut.
Hint
🚨 If you use Windows, please always open the terminal with administrator privileges! Right click, and “run as administrator”.
(A) Create a New Project#
The function create_new_project creates a new project directory, required subdirectories, and a basic project configuration file. Each project is identified by the name of the project (e.g. Reaching), name of the experimenter (e.g. YourName), as well as the date at creation.
Thus, this function requires the user to input the name of the project, the name of the experimenter, and the full path of the videos that are (initially) used to create the training dataset.
Optional arguments specify the working directory, where the project directory will be created, and if the user wants to copy the videos (to the project directory). If the optional argument working_directory is unspecified, the project directory is created in the current working directory, and if copy_videos is unspecified symbolic links for the videos are created in the videos directory. Each symbolic link creates a reference to a video and thus eliminates the need to copy the entire video to the video directory (if the videos remain at the original location).
deeplabcut.create_new_project('Name of the project', 'Name of the experimenter', ['Full path of video 1', 'Full path of video2', 'Full path of video3'], working_directory='Full path of the working directory', copy_videos=True/False, multianimal=True/False)
Important path formatting note
Windows users, you must input paths as: r'C:\Users\computername\Videos\reachingvideo1.avi'
or
'C:\\Users\\computername\\Videos\\reachingvideo1.avi'
(TIP: you can also assign the return value of deeplabcut.create_new_project to a variable, i.e. config_path = deeplabcut.create_new_project(...), so that config_path holds the path to the project's config.yaml file.)
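For example, after import deeplabcut, a minimal call (with hypothetical paths that you would replace with your own) could look like this:
config_path = deeplabcut.create_new_project(
    'reaching-task',                       # project name
    'YourName',                            # experimenter
    ['/data/videos/reachingvideo1.avi'],   # full paths of the initial videos
    working_directory='/analysis/project/',
    copy_videos=False,                     # create symbolic links instead of copies
)
The returned config_path can then be passed to all downstream functions.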
This set of arguments will create a project directory with the name Name of the project+name of the experimenter+date of creation of the project in the Working directory and create symbolic links to the videos in the videos directory. The project directory will have subdirectories: dlc-models, labeled-data, training-datasets, and videos. All the outputs generated during the course of a project will be stored in one of these subdirectories, thus allowing each project to be curated separately from other projects. The purpose of the subdirectories is as follows:
dlc-models: This directory contains the subdirectories test and train, each of which holds the meta information with regard to the parameters of the feature detectors in configuration files. The configuration files are YAML files, a common human-readable data serialization language. These files can be opened and edited with standard text editors. The subdirectory train will store checkpoints (called snapshots in TensorFlow) during training of the model. These snapshots allow the user to reload the trained model without re-training it, or to pick-up training from a particular saved checkpoint, in case the training was interrupted.
labeled-data: This directory will store the frames used to create the training dataset. Frames from different videos are stored in separate subdirectories. Each frame has a filename related to the temporal index within the corresponding video, which allows the user to trace every frame back to its origin.
training-datasets: This directory will contain the training dataset used to train the network and metadata, which contains information about how the training dataset was created.
videos: Directory of video links or videos. When copy_videos is set to False, this directory contains symbolic links to the videos. If it is set to True, then the videos will be copied to this directory. The default is False. Additionally, if the user wants to add new videos to the project at any stage, the function add_new_videos can be used. This will update the list of videos in the project’s configuration file.
deeplabcut.add_new_videos('Full path of the project configuration file*', ['full path of video 4', 'full path of video 5'], copy_videos=True/False)
*Please note, Full path of the project configuration file will be referenced as config_path
throughout this protocol.
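For example, to append two more (hypothetical) videos as symbolic links:
deeplabcut.add_new_videos(config_path, ['/data/videos/mouse4.avi', '/data/videos/mouse5.avi'], copy_videos=False)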
The project directory also contains the main configuration file called config.yaml. The config.yaml file contains many important parameters of the project. A complete list of parameters including their description can be found in Box 1.
The create_new_project step writes the following parameters to the configuration file: Task, scorer, date, and project_path, as well as a list of videos, video_sets. The first three parameters should not be changed. The list of videos can be changed by adding new videos or manually removing videos.
API Docs#
- deeplabcut.create_project.new.create_new_project(project, experimenter, videos, working_directory=None, copy_videos=False, videotype='', multianimal=False)#
Create the necessary folders and files for a new project.
Creating a new project involves creating the project directory, sub-directories and a basic configuration file. The configuration file is loaded with the default values. Change its parameters to your project's needs.
- Parameters:
- project: string
The name of the project.
- experimenter: string
The name of the experimenter.
- videos: list[str]
A list of strings representing the full paths of the videos to include in the project. If the strings represent a directory instead of a file, all videos of videotype will be imported.
- working_directory: string, optional
The directory where the project will be created. The default is the current working directory.
- copy_videos: bool, optional, Default: False
If True, the videos are copied to the videos directory. If False, symlinks of the videos will be created in the project/videos directory; in the event of a failure to create symbolic links, videos will be moved instead.
- multianimal: bool, optional, Default: False
For creating a multi-animal project (introduced in DLC 2.2)
- Returns:
- str
Path to the new project configuration file.
Examples
Linux/MacOS:
>>> deeplabcut.create_new_project( project='reaching-task', experimenter='Linus', videos=[ '/data/videos/mouse1.avi', '/data/videos/mouse2.avi', '/data/videos/mouse3.avi' ], working_directory='/analysis/project/', )
>>> deeplabcut.create_new_project( project='reaching-task', experimenter='Linus', videos=['/data/videos'], videotype='.mp4', )
Windows:
>>> deeplabcut.create_new_project( 'reaching-task', 'Bill', [r'C:\yourusername\rig-95\Videos\reachingvideo1.avi'], copy_videos=True, )
Users must format paths with either a raw string, i.e. r'C:\...', OR double backslashes, i.e. 'C:\\...'.
(B) Configure the Project#
Next, open the config.yaml file, which was created during create_new_project. You can edit this file in any text editor. Familiarize yourself with the meaning of the parameters (Box 1). You can edit various parameters; in particular, you must add the list of bodyparts (or points of interest) that you want to track. You can also set the colormap here that is used for all downstream steps (it can also be edited at any time), like labeling GUIs, videos, etc. Here any matplotlib colormap will do! Please DO NOT have spaces in the names of bodyparts.
bodyparts: are the bodyparts of each individual (in the above list).
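If you prefer to edit config.yaml programmatically instead of in a text editor, here is a minimal sketch using PyYAML (assuming it is installed; the bodypart names are placeholders, and note that this simple round-trip does not preserve the comments in the file):
import yaml

with open(config_path, "r") as f:
    cfg = yaml.safe_load(f)

cfg["bodyparts"] = ["snout", "leftear", "rightear", "tailbase"]  # placeholder names (no spaces!)
cfg["colormap"] = "viridis"  # any matplotlib colormap

with open(config_path, "w") as f:
    yaml.dump(cfg, f, default_flow_style=False, sort_keys=False)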
(C) Data Selection (extract frames)#
CRITICAL: A good training dataset should consist of a sufficient number of frames that capture the breadth of the behavior. Ideally, this implies selecting frames from different (behavioral) sessions, under different lighting, and from different animals, if those vary substantially (to train an invariant, robust feature detector). Thus, for creating a robust network that you can reuse in the laboratory, a good training dataset should reflect the diversity of the behavior with respect to postures, luminance conditions, background conditions, animal identities, etc. of the data that will be analyzed. For the simple lab behaviors comprising mouse reaching, open-field behavior and fly behavior, 100-200 frames gave good results (Mathis et al, 2018). However, depending on the required accuracy, the nature of behavior, the video quality (e.g. motion blur, bad lighting) and the context, more or fewer frames might be necessary to create a good network. Ultimately, in order to scale up the analysis to large collections of videos with perhaps unexpected conditions, one can also refine the dataset in an adaptive way (see refinement below).
The function extract_frames
extracts frames from all the videos in the project configuration file in order to create a training dataset. The extracted frames from each video are stored in a separate subdirectory, named after the video file, under 'labeled-data'. This function also has various parameters that might be useful based on the user's needs.
deeplabcut.extract_frames(config_path, mode='automatic/manual', algo='uniform/kmeans', userfeedback=False, crop=True/False)
CRITICAL POINT: It is advisable to keep the frame size small, as large frames increase the training and inference time. The cropping parameters for each video can be provided in the config.yaml file (and see below). When running the function extract_frames, if the parameter crop=True, then you will be asked to draw a box within the GUI (and this is written to the config.yaml file).
userfeedback allows the user to check which videos they wish to extract frames from. In this way, if you added more videos to the config.yaml file it does not, by default, extract frames (again) from every video. If you wish to disable this question, set userfeedback=False.
The provided function either selects frames from the videos that are randomly sampled from a uniform distribution (uniform), by clustering based on visual appearance (k-means), or by manual selection. Random selection of frames works best for behaviors where the postures vary across the whole video. However, some behaviors might be sparse, as in the case of reaching, where the reach and pull are very fast and the mouse is not moving much between trials (thus, we set the default algorithm to kmeans, as this is best for most use-cases we encounter). In such a case, the function that allows selecting frames based on k-means derived quantization would be useful. If the user chooses to use k-means as a method to cluster the frames, then this function downsamples the video and clusters the frames using k-means, where each frame is treated as a vector. Frames from different clusters are then selected. This procedure makes sure that the frames look different. However, on large and long videos, this code is slow due to computational complexity.
CRITICAL POINT: It is advisable to extract frames from a period of the video that contains interesting behaviors, and not extract the frames across the whole video. This can be achieved by using the start and stop parameters in the config.yaml file. Also, the user can change the number of frames to extract from each video using numframes2pick in the config.yaml file.
However, picking frames is highly dependent on the data and the behavior being studied. Therefore, it is hard to provide all-purpose code that extracts frames to create a good training dataset for every behavior and animal. If the user feels specific frames are lacking, they can extract hand-selected frames of interest using the interactive GUI provided along with the toolbox. This can be launched by using:
deeplabcut.extract_frames(config_path, 'manual')
The user can use the Load Video button to load one of the videos in the project configuration file, use the scroll bar to navigate across the video and Grab a Frame (or a range of frames, as of version 2.0.5) to extract the frame(s). The user can also look at the extracted frames and e.g. delete frames (from the directory) that are too similar before reloading the set and then manually annotating them.
API Docs#
- deeplabcut.generate_training_dataset.frame_extraction.extract_frames(config, mode='automatic', algo='kmeans', crop=False, userfeedback=True, cluster_step=1, cluster_resizewidth=30, cluster_color=False, opencv=True, slider_width=25, config3d=None, extracted_cam=0, videos_list=None)#
Extracts frames from the project videos.
Frames will be extracted from videos listed in the config.yaml file.
The frames are selected from the videos in a randomly and temporally uniformly distributed way (uniform), by clustering based on visual appearance (k-means), or by manual selection.
After frames have been extracted from all videos from one camera, matched frames from other cameras can be extracted using mode = "match". This is necessary if you plan to use epipolar lines to improve labeling across multiple camera angles. It will overwrite previously extracted images from the second camera angle if necessary.
Please refer to the user guide for more details on methods and parameters https://www.nature.com/articles/s41596-019-0176-0 or the preprint: https://www.biorxiv.org/content/biorxiv/early/2018/11/24/476531.full.pdf
- Parameters:
- config: string
Full path of the config.yaml file as a string.
- mode: string, either "automatic", "manual" or "match"
String containing the mode of extraction. It must be either "automatic" or "manual" to extract the initial set of frames. It can also be "match" to match frames between the cameras in preparation for the use of epipolar lines during labeling; namely, extract from camera_1 first, then run this to extract the matched frames in camera_2.
WARNING: if you use "match", and you previously extracted and labeled frames from the second camera, this will overwrite your data. This will require you to delete the collectdata(.h5/.csv) files before labeling. Use with caution!
- algo: string, either "kmeans" or "uniform", Default: "kmeans"
String specifying the algorithm to use for selecting the frames. Currently, deeplabcut supports either kmeans or uniform based selection. This flag is only required for "automatic" mode and the default is "kmeans". For "uniform", frames are picked in a temporally uniform way; "kmeans" performs clustering on downsampled frames (see user guide for details).
NOTE: Color information is discarded for "kmeans"; thus, e.g., for camouflaged octopus clustering one might want to change this.
- crop: bool or str, optional
If True, video frames are cropped according to the corresponding coordinates stored in the project configuration file. Alternatively, if cropping coordinates are not known yet, crop="GUI" triggers a user interface where the cropping area can be manually drawn and saved.
- userfeedback: bool, optional
If this is set to False during "automatic" mode then frames for all videos are extracted. The user can set this to True, which will result in a dialog where the user is asked for each video if (additional/any) frames from this video should be extracted. Use this, e.g., if you have already labeled some folders and want to extract data for new videos.
- cluster_resizewidth: int, default: 30
For "kmeans" one can change the width to which the images are downsampled (aspect ratio is fixed).
- cluster_step: int, default: 1
By default each frame is used for clustering, but for long videos one could only use every nth frame (set using this parameter). This saves memory before clustering can start; however, reading the individual frames takes longer due to the skipping.
- cluster_color: bool, default: False
If False, then each downsampled image is treated as a grayscale vector (discarding color information). If True, then the color channels are considered. This increases the computational complexity.
- opencv: bool, default: True
Uses OpenCV for loading & extracting (otherwise moviepy (legacy)).
- slider_width: int, default: 25
Width of the video frames slider, in percent of window.
- config3d: string, optional
Path to the project configuration file in the 3D project. This will be used to match frames extracted from all cameras present in the field 'camera_names' to the frames extracted from the camera given by the parameter 'extracted_cam'.
- extracted_cam: int, default: 0
The index of the camera that already has extracted frames. This will match frame numbers to extract for all other cameras. This parameter is necessary if you wish to use epipolar lines in the labeling toolbox. Only use if mode='match' and config3d is provided.
- videos_list: list[str], Default: None
A list of strings containing the full paths of the videos to extract frames for. If this is left as None, all videos specified in the config file will have frames extracted. Otherwise one can select a subset by passing those paths.
- Returns:
- None
Notes
Use the function add_new_videos at any stage of the project to add new videos to the config file and extract their frames.
The following parameters for automatic extraction are used from the config file: numframes2pick, start and stop.
While selecting the frames manually, you do not need to specify the crop parameter in the command. Rather, you will get a prompt in the graphic user interface to choose if you need to crop or not.
Examples
To extract frames automatically with ‘kmeans’ and then crop the frames
>>> deeplabcut.extract_frames( config='/analysis/project/reaching-task/config.yaml', mode='automatic', algo='kmeans', crop=True, )
To extract frames automatically with ‘kmeans’ and then defining the cropping area using a GUI
>>> deeplabcut.extract_frames( '/analysis/project/reaching-task/config.yaml', 'automatic', 'kmeans', 'GUI', )
To consider the color information when extracting frames automatically with ‘kmeans’
>>> deeplabcut.extract_frames( '/analysis/project/reaching-task/config.yaml', 'automatic', 'kmeans', cluster_color=True, )
To extract frames automatically with ‘uniform’ and then crop the frames
>>> deeplabcut.extract_frames( '/analysis/project/reaching-task/config.yaml', 'automatic', 'uniform', crop=True, )
To extract frames manually
>>> deeplabcut.extract_frames( '/analysis/project/reaching-task/config.yaml', 'manual' )
To extract frames manually, with a 60% wide frames slider
>>> deeplabcut.extract_frames( '/analysis/project/reaching-task/config.yaml', 'manual', slider_width=60, )
To extract frames from a second camera that match the frames extracted from the first
>>> deeplabcut.extract_frames( '/analysis/project/reaching-task/config.yaml', mode='match', extracted_cam=0, )
(D) Label Frames#
The toolbox provides a function label_frames which helps the user to easily label all the extracted frames using an interactive graphical user interface (GUI). The user should have already named the body parts to label (points of interest) in the project’s configuration file by providing a list. The following command invokes the labeling toolbox.
deeplabcut.label_frames(config_path)
The user needs to use the Load Frames button to select the directory which stores the extracted frames from one of
the videos. Subsequently, the user can use one of the radio buttons (top right) to select a body part to label. RIGHT click to add the label. Left click to drag the label, if needed. If you label a part accidentally, you can use the middle button on your mouse to delete! If you cannot see a body part in the frame, skip over the label! Please see the HELP
button for more user instructions. This auto-advances once you have labeled the first body part. You can also advance to the next frame by clicking the RIGHT arrow on your keyboard (and go to a previous frame with the LEFT arrow).
Each label will be plotted as a dot in a unique color.
The user is free to move around the body part and once satisfied with its position, can select another radio button (in the top right) to switch to the respective body part (it otherwise auto-advances). The user can skip a body part if it is not visible. Once all the visible body parts are labeled, then the user can use ‘Next Frame’ to load the following frame. The user needs to save the labels after all the frames from one of the videos are labeled by clicking the save button at the bottom right. Saving the labels will create a labeled dataset for each video in a hierarchical data file format (HDF) in the subdirectory corresponding to the particular video in labeled-data. You can save at any intermediate step (even without closing the GUI, just hit save) and you return to labeling a dataset by reloading it!
CRITICAL POINT: It is advisable to consistently label similar spots (e.g., on a wrist that is very large, try to label the same location). In general, invisible or occluded points should not be labeled by the user. They can simply be skipped by not applying the label anywhere on the frame.
OPTIONAL: In the event of adding more labels to the existing labeled dataset, the user needs to append the new labels to the bodyparts in the config.yaml file. Thereafter, the user can call the function label_frames. As of 2.0.5+: a box will pop up and ask the user if they wish to display all parts, or only add in the new labels. Saving the labels after all the images are labeled will append the new labels to the existing labeled dataset.
HOT KEYS IN THE Labeling GUI (also see “help” in GUI):
Ctrl + C: Copy labels from previous frame.
Keyboard arrows: advance frames.
Delete key: delete label.
(E) Check Annotated Frames#
OPTIONAL: Checking if the labels were created and stored correctly is beneficial for training, since labeling is one of the most critical parts for creating the training dataset. The DeepLabCut toolbox provides a function ‘check_labels’ to do so. It is used as follows:
deeplabcut.check_labels(config_path, visualizeindividuals=True/False)
For each video directory in labeled-data this function creates a subdirectory with labeled as a suffix. Those directories contain the frames plotted with the annotated body parts. The user can double check if the body parts are labeled correctly. If they are not correct, the user can reload the frames (i.e. deeplabcut.label_frames
), move them around, and click save again.
API Docs#
- deeplabcut.generate_training_dataset.trainingsetmanipulation.check_labels(config, Labels=['+', '.', 'x'], scale=1, dpi=100, draw_skeleton=True, visualizeindividuals=True)#
Check the labeled frames.
Double check if the labels were at the correct locations and stored in the proper file format.
This creates a new subdirectory for each video under the ‘labeled-data’ and all the frames are plotted with the labels.
Make sure that these labels are fine.
- Parameters:
- config: string
Full path of the config.yaml file as a string.
- Labels: list, default=’+’
List of at least 3 matplotlib markers. The first one will be used to indicate the human ground truth location (Default: +)
- scale: float, default=1
Change the relative size of the output images.
- dpi: int, optional, default=100
Output resolution in dpi.
- draw_skeleton: bool, default=True
Plot skeleton overlaid over body parts.
- visualizeindividuals: bool, default: True.
For a multianimal project, if True, the different individuals have different colors (and all bodyparts the same). If False, the colors change over bodyparts rather than individuals.
- Returns:
- None
Examples
>>> deeplabcut.check_labels('/analysis/project/reaching-task/config.yaml')
(F) Create Training Dataset(s) and selection of your neural network#
CRITICAL POINT: Only run this step where you are going to train the network. If you label on your laptop but move your project folder to Google Colab or AWS, a lab server, etc., then run the step below on that platform! If you labeled on a Windows machine but train on Linux, this is fine; as of 2.0.4 onwards it is handled automatically (it saves file sets as both Linux and Windows for you).
If you move your project folder, you only need to change the project_path (which is done automatically) in the main config.yaml file - that's it - no need to change the video paths, etc! Your project is fully portable.
Be aware that you select your neural network backbone at this stage. As of DLC 3+ we support PyTorch (and TensorFlow, but this will be phased out).
OVERVIEW: This function combines the labeled datasets from all the videos and splits them to create train and test datasets. The training data will be used to train the network, while the test data set will be used for evaluating the network. The function create_training_dataset performs those steps.
deeplabcut.create_training_dataset(config_path)
OPTIONAL: If the user wishes to benchmark the performance of DeepLabCut, they can create multiple training datasets by specifying an integer value for num_shuffles; see the docstring for more details.
This function also creates two subdirectories within dlc-models called test and train, and these each have a configuration file called pose_cfg.yaml.
Specifically, the user can edit the pose_cfg.yaml within the train subdirectory before starting the training. These configuration files contain meta information with regard to the parameters of the feature detectors. Key parameters are listed in Box 2.
CRITICAL POINT: At this step, for create_training_dataset you select the network you want to use, and any additional data augmentation (beyond our defaults). You can set net_type
and augmenter_type
when you call the function.
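For example, a sketch of explicitly requesting a ResNet-50 backbone with the imgaug augmenter (one reasonable combination; both parameters are documented in the API docs below):
deeplabcut.create_training_dataset(config_path, num_shuffles=1, net_type='resnet_50', augmenter_type='imgaug')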
Networks: ImageNet pre-trained network weights OR SuperAnimal pre-trained network weights will be downloaded, depending on what you select. You can decide to do transfer learning (recommended) or to “fine-tune” both the backbone and the decoder head. We suggest seeing our dedicated documentation on models for more information.
Hint
🚨 If they do not download (you will see this downloading in the terminal), then you may not have permission to do so - be sure to open your terminal “as an admin” (This is only something we have seen with some Windows users - see the docs for more help!).
DATA AUGMENTATION: At this stage you can also decide what type of augmentation to use. The default loaders work well for most all tasks (as shown on www.deeplabcut.org), but there are many options, more data augmentation, intermediate supervision, etc. Please look at the pose_cfg.yaml file for a full list of parameters you might want to change before running this step. There are several data loaders that can be used. For example, you can use the default loader (introduced and described in the Nature Protocols paper), TensorPack for data augmentation (currently this is easiest on Linux only), or imgaug. We recommend imgaug
. You can set this by passing:
deeplabcut.create_training_dataset(config_path, augmenter_type='imgaug')
For TensorFlow Models: the differences of the loaders are as follows:
imgaug: a lot of augmentation possibilities, efficient code for target map creation & batch sizes >1 supported. You can set parameters such as the batch_size in the pose_cfg.yaml file for the model you are training. This is the recommended DEFAULT!
crop_scale: our standard DLC 2.0 variant introduced in Nature Protocols (scaling, auto-crop augmentation).
tensorpack: a lot of augmentation possibilities, multi CPU support for fast processing, target maps are created less efficiently than in imgaug, does not allow batch size >1.
deterministic: only useful for testing; freezes the numpy seed, otherwise like default.
For PyTorch Models:
#TODO: more information coming soon; in the meantime see the docstrings!
MODEL COMPARISON: You can also test several models by creating the same test/train split for different networks. You can easily do this in the Project Manager GUI, which also lets you compare PyTorch and TensorFlow models.
Please also consult the following page on selecting models. See Box 2 on how to specify which network is loaded for training (including your own network, etc.).
API Docs for deeplabcut.create_training_dataset#
- deeplabcut.generate_training_dataset.trainingsetmanipulation.create_training_dataset(config, num_shuffles=1, Shuffles=None, windows2linux=False, userfeedback=False, trainIndices=None, testIndices=None, net_type=None, augmenter_type=None, posecfg_template=None, superanimal_name='')#
Creates a training dataset.
Labels from all the extracted frames are merged into a single .h5 file. Only the videos included in the config file are used to create this dataset.
- Parameters:
- config: string
Full path of the
config.yaml
file as a string.- num_shufflesint, optional, default=1
Number of shuffles of training dataset to create, i.e.
[1,2,3]
fornum_shuffles=3
.- Shuffles: list[int], optional
Alternatively the user can also give a list of shuffles.
- userfeedback: bool, optional, default=False
If
False
, all requested train/test splits are created (no matter if they already exist). If you want to assure that previous splits etc. are not overwritten, set this toTrue
and you will be asked for each split.- trainIndices: list of lists, optional, default=None
List of one or multiple lists containing train indexes. A list containing two lists of training indexes will produce two splits.
- testIndices: list of lists, optional, default=None
List of one or multiple lists containing test indexes.
- net_type: list, optional, default=None
Type of networks. Currently supported options are
resnet_50
resnet_101
resnet_152
mobilenet_v2_1.0
mobilenet_v2_0.75
mobilenet_v2_0.5
mobilenet_v2_0.35
efficientnet-b0
efficientnet-b1
efficientnet-b2
efficientnet-b3
efficientnet-b4
efficientnet-b5
efficientnet-b6
- augmenter_type: string, optional, default=None
Type of augmenter. Currently supported augmenters are
default
scalecrop
imgaug
tensorpack
deterministic
- posecfg_template: string, optional, default=None
Path to a pose_cfg.yaml file to use as a template for generating the new one for the current iteration. Useful if you would like to start with the same parameters as a previous training iteration. None uses the default pose_cfg.yaml.
- superanimal_name: string, optional, default=””
Specify the SuperAnimal name if transfer learning with SuperAnimal is desired. This makes sure the pose config template uses the SuperAnimal configs as template.
- Returns:
- list(tuple) or None
If training dataset was successfully created, a list of tuples is returned. The first two elements in each tuple represent the training fraction and the shuffle value. The last two elements in each tuple are arrays of integers representing the training and test indices.
Returns None if training dataset could not be created.
Notes
Use the function
add_new_videos
at any stage of the project to add more videos to the project.Examples
Linux/MacOS
>>> deeplabcut.create_training_dataset( '/analysis/project/reaching-task/config.yaml', num_shuffles=1, )
Windows
>>> deeplabcut.create_training_dataset( r'C:\Users\Ulf\looming-task\config.yaml', Shuffles=[3,17,5], )
API Docs for deeplabcut.create_training_model_comparison#
- deeplabcut.generate_training_dataset.trainingsetmanipulation.create_training_model_comparison(config, trainindex=0, num_shuffles=1, net_types=['resnet_50'], augmenter_types=['imgaug'], userfeedback=False, windows2linux=False)#
Creates a training dataset to compare networks and augmentation types.
The datasets are created such that the shuffles have the same training and testing indices. Therefore, this function is useful for benchmarking the performance of different network and augmentation types on the same training/test data.
- Parameters:
- config: str
Full path of the config.yaml file.
- trainindex: int, optional, default=0
Either (in case uniform = True) indexes which element of TrainingFraction in the config file should be used (note it is a list!). Alternatively (uniform = False) indexes which folder is dropped, i.e. the first if trainindex=0, the second if trainindex=1, etc.
- num_shuffles: int, optional, default=1
Number of shuffles of training dataset to create, i.e. [1,2,3] for num_shuffles=3.
- net_types: list[str], optional, default=[“resnet_50”]
Currently supported networks are
"resnet_50"
"resnet_101"
"resnet_152"
"mobilenet_v2_1.0"
"mobilenet_v2_0.75"
"mobilenet_v2_0.5"
"mobilenet_v2_0.35"
"efficientnet-b0"
"efficientnet-b1"
"efficientnet-b2"
"efficientnet-b3"
"efficientnet-b4"
"efficientnet-b5"
"efficientnet-b6"
- augmenter_types: list[str], optional, default=[“imgaug”]
Currently supported augmenters are
"default"
"imgaug"
"tensorpack"
"deterministic"
- userfeedback: bool, optional, default=False
If False, then all requested train/test splits are created, no matter if they already exist. If you want to assure that previous splits etc. are not overwritten, then set this to True and you will be asked for each split.
- windows2linux
- ..deprecated::
Has no effect since 2.2.0.4 and will be removed in 2.2.1.
- Returns:
- shuffle_list: list
List of indices corresponding to the trainingsplits/models that were created.
Examples
On Linux/MacOS
>>> shuffle_list = deeplabcut.create_training_model_comparison( '/analysis/project/reaching-task/config.yaml', num_shuffles=1, net_types=['resnet_50','resnet_152'], augmenter_types=['tensorpack','deterministic'], )
On Windows
>>> shuffle_list = deeplabcut.create_training_model_comparison( r'C:\Users\Ulf\looming-task\config.yaml', num_shuffles=1, net_types=['resnet_50','resnet_152'], augmenter_types=['tensorpack','deterministic'], )
See examples/testscript_openfielddata_augmentationcomparison.py for an example of how to use shuffle_list.
(G) Train The Network#
The function ‘train_network’ helps the user in training the network. It is used as follows:
deeplabcut.train_network(config_path)
The set of arguments in the function starts training the network for the dataset created for one specific shuffle. Note that you can change the loader (imgaug/default/etc) as well as other training parameters in the pose_cfg.yaml file of the model that you want to train (before you start training).
Example parameters that one can call:
deeplabcut.train_network(config_path, shuffle=1, trainingsetindex=0, gputouse=None, max_snapshots_to_keep=5, autotune=False, displayiters=100, saveiters=15000, maxiters=30000, allow_growth=True)
By default, the pretrained networks are not in the DeepLabCut toolbox (as they are around 100MB each); if not previously downloaded, they will be downloaded before you train and stored in a subdirectory pre-trained under the subdirectory models in Pose_Estimation_Tensorflow or Pose_Estimation_PyTorch. At user-specified iterations during training, checkpoints are stored in the subdirectory train under the respective iteration directory.
If the user wishes to restart the training at a specific checkpoint they can specify the full path of the checkpoint to
the variable init_weights
in the pose_cfg.yaml file under the train subdirectory (see Box 2).
CRITICAL POINT, For TensorFlow models: it is recommended to train the ResNets or MobileNets for thousands of iterations until the loss plateaus (typically around 500,000) if you use batch size 1. If you want to batch train, we recommend using Adam, see more here.
CRITICAL POINT, For PyTorch models: PyTorch uses “epochs” not iterations. Please see our dedicated documentation that explains how best to set the number of epochs here. When in doubt, stick to the default! A bonus, training time is much less!
maDeepLabCut CRITICAL POINT: For multi-animal projects we are using not only different and new output layers, but also new data augmentation, optimization, learning rates, and batch training defaults. Thus, please use a lower save_iters
and maxiters
. I.e., we suggest saving every 10K-15K iterations, and only training until 50K-100K iterations. We recommend you look closely at the loss to not overfit on your data. The bonus, training time is much less!
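A sketch of such a call for a TensorFlow-based model, using the suggested values above (adjust them to your own data):
deeplabcut.train_network(config_path, shuffle=1, displayiters=100, saveiters=10000, maxiters=50000)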
API Docs#
- deeplabcut.pose_estimation_tensorflow.training.train_network(config, shuffle=1, trainingsetindex=0, max_snapshots_to_keep=5, displayiters=None, saveiters=None, maxiters=None, allow_growth=True, gputouse=None, autotune=False, keepdeconvweights=True, modelprefix='', superanimal_name='', superanimal_transfer_learning=False)#
Trains the network with the labels in the training dataset.
- Parameters:
- config: string
Full path of the config.yaml file as a string.
- shuffle: int, optional, default=1
Integer value specifying the shuffle index to select for training.
- trainingsetindex: int, optional, default=0
Integer specifying which TrainingsetFraction to use. Note that TrainingFraction is a list in config.yaml.
- max_snapshots_to_keep: int or None
Sets how many snapshots are kept, i.e. states of the trained network. A snapshot is stored every saving iteration, but only the last max_snapshots_to_keep are kept! If you change this to None, then all are kept. See: DeepLabCut/DeepLabCut#8
- displayiters: optional, default=None
This variable is actually set in pose_config.yaml. However, you can overwrite it with this hack. Don't use this regularly, just if you are too lazy to dig out the pose_config.yaml file for the corresponding project. If None, the value from there is used, otherwise it is overwritten!
- saveiters: optional, default=None
This variable is actually set in pose_config.yaml. However, you can overwrite it with this hack. Don't use this regularly, just if you are too lazy to dig out the pose_config.yaml file for the corresponding project. If None, the value from there is used, otherwise it is overwritten!
- maxiters: optional, default=None
This variable is actually set in pose_config.yaml. However, you can overwrite it with this hack. Don't use this regularly, just if you are too lazy to dig out the pose_config.yaml file for the corresponding project. If None, the value from there is used, otherwise it is overwritten!
- allow_growth: bool, optional, default=True
For some smaller GPUs, memory issues can happen. If True, the memory allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed. See issue: https://forum.image.sc/t/how-to-stop-running-out-of-vram/30551/2
- gputouse: optional, default=None
Natural number indicating the number of your GPU (see number in nvidia-smi). If you do not have a GPU put None. See: https://nvidia.custhelp.com/app/answers/detail/a_id/3751/~/useful-nvidia-smi-queries
- autotune: bool, optional, default=False
Property of TensorFlow, somehow faster if False (as Eldar found out, see tensorflow/tensorflow#13317).
- keepdeconvweights: bool, optional, default=True
Also restores the weights of the deconvolution layers (and the backbone) when training from a snapshot. Note that if you change the number of bodyparts, you need to set this to false for re-training.
- modelprefix: str, optional, default=””
Directory containing the deeplabcut models to use when evaluating the network. By default, the models are assumed to exist in the project folder.
- superanimal_name: str, optional, default =””
Specified if transfer learning with superanimal is desired
- superanimal_transfer_learning: bool, optional, default = False.
If set to True, the training is transfer learning (a new decoding layer). If set to False and superanimal_name is specified, then the training is fine-tuning (reusing the decoding layer).
- Returns:
- None
Examples
To train the network for first shuffle of the training dataset
>>> deeplabcut.train_network('/analysis/project/reaching-task/config.yaml')
To train the network for second shuffle of the training dataset
>>> deeplabcut.train_network( '/analysis/project/reaching-task/config.yaml', shuffle=2, keepdeconvweights=True, )
(H) Evaluate the Trained Network#
It is important to evaluate the performance of the trained network. This performance is measured by computing the mean average Euclidean error (MAE; which is proportional to the average root mean square error) between the manual labels and the ones predicted by DeepLabCut. The MAE is saved as a comma separated file and displayed for all pairs and only likely pairs (>p-cutoff). This helps to exclude, for example, occluded body parts. One of the strengths of DeepLabCut is that due to the probabilistic output of the scoremap, it can, if sufficiently trained, also reliably report if a body part is visible in a given frame. (see discussions of finger tips in reaching and the Drosophila legs during 3D behavior in [Mathis et al, 2018]). The evaluation results are computed by typing:
deeplabcut.evaluate_network(config_path, Shuffles=[1], plotting=True)
Setting plotting
to true plots all the testing and training frames with the manual and predicted labels. The user
should visually check the labeled test (and training) images that are created in the ‘evaluation-results’ directory.
Ideally, DeepLabCut labeled unseen (test images) according to the user’s required accuracy, and the average train
and test errors are comparable (good generalization). What (numerically) comprises an acceptable MAE depends on
many factors (including the size of the tracked body parts, the labeling variability, etc.). Note that the test error can
also be larger than the training error due to human variability (in labeling, see Figure 2 in Mathis et al, Nature Neuroscience 2018).
Optional parameters:
Shuffles: list, optional -List of integers specifying the shuffle indices of the training dataset. The default is [1]
plotting: bool, optional -Plots the predictions on the train and test images. The default is `False`; if provided it must be either `True` or `False`
show_errors: bool, optional -Display train and test errors. The default is `True`
comparisonbodyparts: list of bodyparts, Default is all -The average error will be computed for those body parts only (Has to be a subset of the body parts).
gputouse: int, optional -Natural number indicating the number of your GPU (see number in nvidia-smi). If you do not have a GPU, put None. See: https://nvidia.custhelp.com/app/answers/detail/a_id/3751/~/useful-nvidia-smi-queries
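For example, combining several of these options (the body part names below are placeholders for entries from your own config.yaml):
deeplabcut.evaluate_network(config_path, Shuffles=[1], plotting=True, show_errors=True, comparisonbodyparts=['bodypart1', 'bodypart2'])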
The plots can be customized by editing the config.yaml file (i.e., the colormap, scale, marker size (dotsize), and
transparency of labels (alphavalue) can be modified). By default each body part is plotted in a different color
(governed by the colormap) and the plot labels indicate their source. Note that by default the human labels are
plotted as plus (‘+’), and DeepLabCut’s predictions either as ‘.’ (for confident predictions with likelihood > p-cutoff) or as ‘x’ (for likelihood <= p-cutoff).
The evaluation results for each shuffle of the training dataset are stored in a unique subdirectory in a newly created directory ‘evaluation-results’ in the project directory. The user can visually inspect if the distance between the labeled and the predicted body parts are acceptable. In the event of benchmarking with different shuffles of same training dataset, the user can provide multiple shuffle indices to evaluate the corresponding network. Note that with multi-animal projects additional distance statistics aggregated over animals or bodyparts are also stored in that directory. This aims at providing a finer quantitative evaluation of multi-animal prediction performance before animal tracking. If the generalization is not sufficient, the user might want to:
• check if the labels were imported correctly; i.e., invisible points are not labeled and the points of interest are labeled accurately
• make sure that the loss has already converged
• consider labeling additional images and make another iteration of the training data set
OPTIONAL: You can also plot the scoremaps, locref layers, and PAFs:
deeplabcut.extract_save_all_maps(config_path, shuffle=shuffle, Indices=[0, 5])
You can drop “Indices” to run this on all training/testing images (this is slow!).
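i.e., a minimal sketch over all images of shuffle 1 would be:
deeplabcut.extract_save_all_maps(config_path, shuffle=1)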
API Docs#
- deeplabcut.pose_estimation_tensorflow.core.evaluate.evaluate_network(config, Shuffles=[1], trainingsetindex=0, plotting=False, show_errors=True, comparisonbodyparts='all', gputouse=None, rescale=False, modelprefix='', per_keypoint_evaluation: bool = False, snapshots_to_evaluate: List[str] | None = None)#
Evaluates the network.
Evaluates the network based on the saved models at different stages of the training network. The evaluation results are stored in the .h5 and .csv file under the subdirectory ‘evaluation_results’. Change the snapshotindex parameter in the config file to ‘all’ in order to evaluate all the saved models.
- Parameters:
- config: string
Full path of the config.yaml file.
- Shuffles: list, optional, default=[1]
List of integers specifying the shuffle indices of the training dataset.
- trainingsetindex: int or str, optional, default=0
Integer specifying which “TrainingsetFraction” to use. Note that “TrainingFraction” is a list in config.yaml. This variable can also be set to “all”.
- plotting: bool or str, optional, default=False
Plots the predictions on the train and test images. If provided it must be either
True
,False
,"bodypart"
, or"individual"
. Setting toTrue
defaults as"bodypart"
for multi-animal projects.- show_errors: bool, optional, default=True
Display train and test errors.
- comparisonbodyparts: str or list, optional, default=”all”
The average error will be computed for those body parts only. The provided list has to be a subset of the defined body parts.
- gputouse: int or None, optional, default=None
Indicates the GPU to use (see number in
nvidia-smi
). If you do not have a GPU put None`. See: https://nvidia.custhelp.com/app/answers/detail/a_id/3751/~/useful-nvidia-smi-queries- rescale: bool, optional, default=False
Evaluate the model at the
'global_scale'
variable (as set in thepose_config.yaml
file for a particular project). I.e. every image will be resized according to that scale and prediction will be compared to the resized ground truth. The error will be reported in pixels at rescaled to the original size. I.e. For a [200,200] pixel image evaluated atglobal_scale=.5
, the predictions are calculated on [100,100] pixel images, compared to 1/2*ground truth and this error is then multiplied by 2!. The evaluation images are also shown for the original size!- modelprefix: str, optional, default=””
Directory containing the deeplabcut models to use when evaluating the network. By default, the models are assumed to exist in the project folder.
- per_keypoint_evaluation: bool, default=False
Compute the train and test RMSE for each keypoint, and save the results to a {model_name}-keypoint-results.csv in the evaluation-results folder.
- snapshots_to_evaluate: List[str], optional, default=None
List of snapshot names to evaluate (e.g. [“snapshot-50000”, “snapshot-75000”, …])
- Returns:
- None
Examples
If you do not want to plot and evaluate with shuffle set to 1.
>>> deeplabcut.evaluate_network( '/analysis/project/reaching-task/config.yaml', Shuffles=[1], )
If you want to plot and evaluate with shuffle set to 0 and 1.
>>> deeplabcut.evaluate_network( '/analysis/project/reaching-task/config.yaml', Shuffles=[0, 1], plotting=True, )
If you want to plot assemblies for a maDLC project
>>> deeplabcut.evaluate_network( '/analysis/project/reaching-task/config.yaml', Shuffles=[1], plotting="individual", )
Note: This defaults to standard plotting for single-animal projects.
(I) Novel Video Analysis:#
The trained network can be used to analyze new videos. The user needs to first choose a checkpoint with the best evaluation results for analyzing the videos. In this case, the user can enter the corresponding index of the checkpoint to the variable snapshotindex in the config.yaml file. By default, the most recent checkpoint (i.e. last) is used for analyzing the video. Novel/new videos DO NOT have to be in the config file! You can analyze new videos anytime by simply using the following line of code:
deeplabcut.analyze_videos(config_path, ['fullpath/analysis/project/videos/reachingvideo1.avi'], save_as_csv=True)
There are several other optional inputs, such as:
deeplabcut.analyze_videos(config_path, videos, videotype='avi', shuffle=1, trainingsetindex=0, gputouse=None, save_as_csv=False, destfolder=None, dynamic=(True, .5, 10))
The labels are stored in a MultiIndex Pandas Array, which contains the name of the network, body part name, (x, y) label position in pixels, and the likelihood for each frame per body part. These
arrays are stored in an efficient Hierarchical Data Format (HDF) in the same directory, where the video is stored.
However, if the flag save_as_csv is set to True, the data can also be exported in comma-separated values format (.csv), which in turn can be imported in many programs, such as MATLAB, R, Prism, etc. This flag is set to False by default. You can also set a destination folder (destfolder) for the output files by passing the path of the folder you wish to write to.
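A minimal sketch of loading the result back into Python with pandas (the output file name below is hypothetical; in practice it is composed of the video name plus the DLC scorer/network name):
import pandas as pd

# Hypothetical file name: video name + DLC scorer name, written next to the video.
df = pd.read_hdf('fullpath/analysis/project/videos/reachingvideo1DLC_resnet50_reachingshuffle1_200000.h5')
print(df.head())  # MultiIndex columns: (scorer, bodypart, x/y/likelihood)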
API Docs#
- deeplabcut.pose_estimation_tensorflow.predict_videos.analyze_videos(config, videos, videotype='', shuffle=1, trainingsetindex=0, gputouse=None, save_as_csv=False, in_random_order=True, destfolder=None, batchsize=None, cropping=None, TFGPUinference=True, dynamic=(False, 0.5, 10), modelprefix='', robust_nframes=False, allow_growth=False, use_shelve=False, auto_track=True, n_tracks=None, calibrate=False, identity_only=False, use_openvino=None)#
Makes prediction based on a trained network.
The index of the trained network is specified by parameters in the config file (in particular the variable ‘snapshotindex’).
The labels are stored as MultiIndex Pandas Array, which contains the name of the network, body part name, (x, y) label position in pixels, and the likelihood for each frame per body part. These arrays are stored in an efficient Hierarchical Data Format (HDF) in the same directory where the video is stored. However, if the flag save_as_csv is set to True, the data can also be exported in comma-separated values format (.csv), which in turn can be imported in many programs, such as MATLAB, R, Prism, etc.
- Parameters:
- config: str
Full path of the config.yaml file.
- videos: list[str]
A list of strings containing the full paths to videos for analysis or a path to the directory, where all the videos with same extension are stored.
- videotype: str, optional, default=””
Checks for the extension of the video in case the input to the video is a directory. Only videos with this extension are analyzed. If left unspecified, videos with common extensions (‘avi’, ‘mp4’, ‘mov’, ‘mpeg’, ‘mkv’) are kept.
- shuffle: int, optional, default=1
An integer specifying the shuffle index of the training dataset used for training the network.
- trainingsetindex: int, optional, default=0
Integer specifying which TrainingsetFraction to use. By default the first (note that TrainingFraction is a list in config.yaml).
- gputouse: int or None, optional, default=None
Indicates the GPU to use (see number in
nvidia-smi
). If you do not have a GPU putNone
. See: https://nvidia.custhelp.com/app/answers/detail/a_id/3751/~/useful-nvidia-smi-queries- save_as_csv: bool, optional, default=False
Saves the predictions in a .csv file.
- in_random_order: bool, optional (default=True)
Whether or not to analyze videos in a random order. This is only relevant when specifying a video directory in videos.
- destfolder: string or None, optional, default=None
Specifies the destination folder for analysis data. If
None
, the path of the video is used. Note that for subsequent analysis this folder also needs to be passed.- batchsize: int or None, optional, default=None
Change batch size for inference; if given overwrites value in
pose_cfg.yaml
.- cropping: list or None, optional, default=None
List of cropping coordinates as [x1, x2, y1, y2]. Note that the same cropping parameters will then be used for all videos. If different video crops are desired, run
analyze_videos
on individual videos with the corresponding cropping coordinates.- TFGPUinference: bool, optional, default=True
Perform inference on GPU with TensorFlow code. Introduced in “Pretraining boosts out-of-domain robustness for pose estimation” by Alexander Mathis, Mert Yüksekgönül, Byron Rogers, Matthias Bethge, Mackenzie W. Mathis. Source: https://arxiv.org/abs/1909.11229
- dynamic: tuple(bool, float, int) triple containing (state, detection threshold, margin)
If the state is true, then dynamic cropping will be performed. That means that if an object is detected (i.e. any body part > detection threshold), then object boundaries are computed according to the smallest/largest x position and smallest/largest y position of all body parts. This window is expanded by the margin and from then on only the posture within this crop is analyzed (until the object is lost, i.e. < detection threshold). The current position is utilized for updating the crop window for the next frame (this is why the margin is important and should be set large enough given the movement of the animal).
- modelprefix: str, optional, default=””
Directory containing the deeplabcut models to use when evaluating the network. By default, the models are assumed to exist in the project folder.
- robust_nframes: bool, optional, default=False
Evaluate a video’s number of frames in a robust manner. This option is slower (as the whole video is read frame-by-frame), but does not rely on metadata, hence its robustness against file corruption.
- allow_growth: bool, optional, default=False.
For some smaller GPUs the memory issues happen. If
True
, the memory allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed. See issue: https://forum.image.sc/t/how-to-stop-running-out-of-vram/30551/2- use_shelve: bool, optional, default=False
By default, data are dumped in a pickle file at the end of the video analysis. Otherwise, data are written to disk on the fly using a “shelf”; i.e., a pickle-based, persistent, database-like object by default, resulting in constant memory footprint.
- The following parameters are only relevant for multi-animal projects:
- auto_track: bool, optional, default=True
By default, tracking and stitching are automatically performed, producing the final h5 data file. This is equivalent to the behavior for single-animal projects.
If
False
, one must runconvert_detections2tracklets
andstitch_tracklets
afterwards, in order to obtain the h5 file.- This function has 3 related sub-calls:
- identity_only: bool, optional, default=False
If
True
and animal identity was learned by the model, assembly and tracking rely exclusively on identity prediction.- calibrate: bool, optional, default=False
If
True
, use training data to calibrate the animal assembly procedure. This improves its robustness to wrong body part links, but requires very little missing data.- n_tracks: int or None, optional, default=None
Number of tracks to reconstruct. By default, taken as the number of individuals defined in the config.yaml. Another number can be passed if the number of animals in the video is different from the number of animals the model was trained on.
- use_openvino: str, optional
Use “CPU” for inference if OpenVINO is available in the Python environment.
- Returns:
- DLCScorer: str
the scorer used to analyze the videos
Examples
Analyzing a single video on Windows
>>> deeplabcut.analyze_videos( r'C:\myproject\reaching-task\config.yaml', [r'C:\yourusername\rig-95\Videos\reachingvideo1.avi'], )
Analyzing a single video on Linux/MacOS
>>> deeplabcut.analyze_videos( '/analysis/project/reaching-task/config.yaml', ['/analysis/project/videos/reachingvideo1.avi'], )
Analyze all videos of type avi in a folder
>>> deeplabcut.analyze_videos( '/analysis/project/reaching-task/config.yaml', ['/analysis/project/videos'], videotype='.avi', )
Analyze multiple videos
>>> deeplabcut.analyze_videos( '/analysis/project/reaching-task/config.yaml', [ '/analysis/project/videos/reachingvideo1.avi', '/analysis/project/videos/reachingvideo2.avi', ], )
Analyze multiple videos with shuffle=2
>>> deeplabcut.analyze_videos( '/analysis/project/reaching-task/config.yaml', [ '/analysis/project/videos/reachingvideo1.avi', '/analysis/project/videos/reachingvideo2.avi', ], shuffle=2, )
Analyze multiple videos with shuffle=2, and save results as an additional csv file
>>> deeplabcut.analyze_videos( '/analysis/project/reaching-task/config.yaml', [ '/analysis/project/videos/reachingvideo1.avi', '/analysis/project/videos/reachingvideo2.avi', ], shuffle=2, save_as_csv=True, )
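Analyze a very long video while keeping memory usage low by combining the options above; a minimal sketch (the video path is illustrative, and all other settings are left at their defaults)
>>> deeplabcut.analyze_videos( '/analysis/project/reaching-task/config.yaml', ['/analysis/project/videos/longsession.avi'], use_shelve=True, robust_nframes=True, allow_growth=True, )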
Novel Video Analysis: extra features#
Dynamic-cropping of videos:#
As of 2.1+ we have a dynamic cropping option. Namely, if you have large frames and the animal/object occupies a smaller fraction, you can crop around your animal/object to make processing speeds faster. For example, if you have a large open field experiment but only track the mouse, this will speed up your analysis (also helpful for real-time applications). To use this, simply add dynamic=(True, .5, 10) when you call analyze_videos.
dynamic: triple containing (state, detectiontreshold, margin)
If the state is true, then dynamic cropping will be performed. That means that if an object is detected (i.e., any body part > detectiontreshold), then object boundaries are computed according to the smallest/largest x position and smallest/largest y position of all body parts. This window is expanded by the margin and from then on only the posture within this crop is analyzed (until the object is lost; i.e., <detectiontreshold). The current position is utilized for updating the crop window for the next frame (this is why the margin is important and should be set large enough given the movement of the animal).
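For example, a minimal sketch of such a call (the video path is illustrative):
deeplabcut.analyze_videos(config_path, ['fullpath/analysis/project/videos/openfield.avi'], dynamic=(True, 0.5, 10))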
(J) Filter pose data (RECOMMENDED!):#
You can also filter the predictions with a median filter (default) or with a SARIMAX model, if you wish. This creates a new .h5 file with the ending _filtered that you can use in create_labeled_video and/or plot_trajectories.
deeplabcut.filterpredictions(config_path, ['fullpath/analysis/project/videos/reachingvideo1.avi'])
An example call:
deeplabcut.filterpredictions(config_path, ['fullpath/analysis/project/videos'], videotype='.mp4', filtertype='arima', ARdegree=5, MAdegree=2)
Here are parameters you can modify and pass:
deeplabcut.filterpredictions(config_path, ['fullpath/analysis/project/videos/reachingvideo1.avi'], shuffle=1, trainingsetindex=0, comparisonbodyparts='all', filtertype='arima', p_bound=0.01, ARdegree=3, MAdegree=1, alpha=0.01)
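The filtered predictions are written next to the original ones in an .h5 file ending in _filtered; a minimal sketch of inspecting them with pandas (the filename below is hypothetical, as the real one includes the DLC scorer string):
import pandas as pd

# hypothetical filename; the actual name contains the scorer/model identifier
df = pd.read_hdf('reachingvideo1DLC_resnet50_ReachingAug30shuffle1_200000_filtered.h5')
print(df.head())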
API Docs#
- deeplabcut.post_processing.filtering.filterpredictions(config, video, videotype='', shuffle=1, trainingsetindex=0, filtertype='median', windowlength=5, p_bound=0.001, ARdegree=3, MAdegree=1, alpha=0.01, save_as_csv=True, destfolder=None, modelprefix='', track_method='', return_data=False)#
Fits frame-by-frame pose predictions.
The pose predictions are fitted with an ARIMA model (filtertype='arima') or a median filter (default).
- Parameters:
- config: string
Full path of the config.yaml file.
- video: string
Full path of the video whose predictions should be filtered. Make sure that this video is already analyzed.
- shuffle: int, optional, default=1
The shuffle index of the training dataset.
- trainingsetindex: int, optional, default=0
Integer specifying which TrainingsetFraction to use. Note that TrainingFraction is a list in config.yaml.
- filtertype: string, optional, default=”median”.
The filter type - ‘arima’, ‘median’ or ‘spline’.
- windowlength: int, optional, default=5
For filtertype='median', the input array is filtered using a local window of size windowlength (the array is automatically zero-padded; see https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.medfilt.html). The windowlength should be an odd number. If filtertype='spline', windowlength is the maximal gap size to fill.
- p_bound: float between 0 and 1, optional, default=0.001
For filtertype 'arima', this parameter defines the likelihood below which a body part will be considered as missing data for filtering purposes.
- ARdegree: int, optional, default=3
For filtertype 'arima': autoregressive degree of the SARIMAX model. See https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html
- MAdegree: int, optional, default=1
For filtertype 'arima': moving average degree of the SARIMAX model. See https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html
- alpha: float, optional, default=0.01
Significance level for detecting outliers based on confidence interval of fitted SARIMAX model.
- save_as_csv: bool, optional, default=True
Saves the predictions in a .csv file.
- destfolder: string, optional, default=None
Specifies the destination folder for analysis data. If None, the path of the video is used by default. Note that for subsequent analysis this folder also needs to be passed.
- modelprefix: str, optional, default=""
Directory containing the deeplabcut models to use when evaluating the network. By default, the models are assumed to exist in the project folder.
- track_method: string, optional, default=””
Specifies the tracker used to generate the data. Empty by default (corresponding to a single animal project). For multiple animals, must be either ‘box’, ‘skeleton’, or ‘ellipse’ and will be taken from the config.yaml file if none is given.
- return_data: bool, optional, default=False
If True, returns a dictionary of the filtered data keyed by video names.
- Returns:
- video_to_filtered_df
Dictionary mapping video filepaths to filtered dataframes.
If no videos exist, the dictionary will be empty.
If a video is not analyzed, the corresponding value in the dictionary will be None.
Examples
Arima model:
>>> deeplabcut.filterpredictions( r'C:\myproject\reaching-task\config.yaml', [r'C:\myproject\trailtracking-task\test.mp4'], shuffle=3, filtertype='arima', ARdegree=5, MAdegree=2, )
Use median filter over 10 bins:
>>> deeplabcut.filterpredictions( r'C:\myproject\reaching-task\config.yaml', [r'C:\myproject\trailtracking-task\test.mp4'], shuffle=3, windowlength=10, )
One can then use the filtered rather than the frame-by-frame predictions by calling:
>>> deeplabcut.plot_trajectories( r'C:\myproject\reaching-task\config.yaml', [r'C:\myproject\trailtracking-task\test.mp4'], shuffle=3, filtered=True, )
>>> deeplabcut.create_labeled_video( r'C:\myproject\reaching-task\config.yaml', [r'C:\myproject\trailtracking-task\test.mp4'], shuffle=3, filtered=True, )
(K) Plot Trajectories:#
The plotting components of this toolbox utilize matplotlib; therefore, these plots can easily be customized by the end user. We also provide a function to plot the trajectory of the extracted poses across the analyzed video, which can be called by typing:
deeplabcut.plot_trajectories(config_path, ['fullpath/analysis/project/videos/reachingvideo1.avi'])
It creates a folder called plot-poses (in the directory of the video). The plots display the coordinates of body parts vs. time, likelihoods vs. time, the x- vs. y-coordinates of the body parts, as well as histograms of consecutive coordinate differences. These plots help the user to quickly assess the tracking performance for a video. Ideally, the likelihood stays high and the histogram of consecutive coordinate differences has values close to zero (i.e., no jumps in body part detections across frames).
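For instance, to plot only the filtered trajectories of a subset of body parts, one could call (a sketch; the body part names follow the demo project and are illustrative):
deeplabcut.plot_trajectories(config_path, ['fullpath/analysis/project/videos/reachingvideo1.avi'], filtered=True, displayedbodyparts=['hand', 'Joystick'])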
API Docs#
- deeplabcut.utils.plotting.plot_trajectories(config, videos, videotype='', shuffle=1, trainingsetindex=0, filtered=False, displayedbodyparts='all', displayedindividuals='all', showfigures=False, destfolder=None, modelprefix='', imagetype='.png', resolution=100, linewidth=1.0, track_method='')#
Plots the trajectories of various bodyparts across the video.
- Parameters:
- config: str
Full path of the config.yaml file.
- videos: list[str]
Full paths to videos for analysis or a path to the directory, where all the videos with same extension are stored.
- videotype: str, optional, default=””
Checks for the extension of the video in case the input to the video is a directory. Only videos with this extension are analyzed. If left unspecified, videos with common extensions (‘avi’, ‘mp4’, ‘mov’, ‘mpeg’, ‘mkv’) are kept.
- shuffle: int, optional, default=1
Integer specifying the shuffle index of the training dataset.
- trainingsetindex: int, optional, default=0
Integer specifying which TrainingsetFraction to use. Note that TrainingFraction is a list in config.yaml.
- filtered: bool, optional, default=False
Boolean variable indicating if filtered output should be plotted rather than frame-by-frame predictions. The filtered version can be calculated with deeplabcut.filterpredictions.
- displayedbodyparts: list[str] or str, optional, default="all"
This selects the body parts that are plotted in the video. Either "all", in which case all body parts from config.yaml are used, or a list of strings that is a subset of the full list, e.g. ['hand', 'Joystick'] for the demo Reaching-Mackenzie-2018-08-30/config.yaml to select only these two body parts.
- showfigures: bool, optional, default=False
If True, the plots are also displayed.
- destfolder: string or None, optional, default=None
Specifies the destination folder that was used for storing analysis data. If None, the path of the video is used.
- modelprefix: str, optional, default=""
Directory containing the deeplabcut models to use when evaluating the network. By default, the models are assumed to exist in the project folder.
- imagetype: string, optional, default=”.png”
Specifies the output image format: '.tif', '.jpg', '.svg', or '.png'.
- resolution: int, optional, default=100
Specifies the resolution (in dpi) of saved figures. Note higher resolution figures take longer to generate.
- linewidth: float, optional, default=1.0
Specifies width of line for line and histogram plots.
- track_method: string, optional, default=””
Specifies the tracker used to generate the data. Empty by default (corresponding to a single animal project). For multiple animals, must be either ‘box’, ‘skeleton’, or ‘ellipse’ and will be taken from the config.yaml file if none is given.
- Returns:
- None
Examples
Plot the trajectories for a single video
>>> deeplabcut.plot_trajectories( '/home/alex/analysis/project/reaching-task/config.yaml', ['/home/alex/analysis/project/videos/reachingvideo1.avi'], )
(L) Create Labeled Videos:#
Additionally, the toolbox provides a function to create labeled videos based on the extracted poses by plotting the
labels on top of the frame and creating a video. There are two modes to create videos: FAST and SLOW (but higher quality!). If you want to create high-quality videos, please add save_frames=True. One can use the command as follows to create multiple labeled videos:
deeplabcut.create_labeled_video(config_path, ['fullpath/analysis/project/videos/reachingvideo1.avi','fullpath/analysis/project/videos/reachingvideo2.avi'], save_frames = True/False)
Optionally, if you want to use the filtered data for a video or directory of filtered videos, pass filtered=True, i.e.:
deeplabcut.create_labeled_video(config_path, ['fullpath/afolderofvideos'], videotype='.mp4', filtered=True)
You can also optionally add a skeleton to connect points and/or add a history of points for visualization. To set the "trailing points" you need to pass trailpoints:
deeplabcut.create_labeled_video(config_path, ['fullpath/afolderofvideos'], videotype='.mp4', trailpoints=10)
To draw a skeleton, you need to first define the pairs of connected nodes and set the skeleton color (both in the config.yaml file). There is also a GUI to help you do this, which you can launch by calling deeplabcut.SkeletonBuilder(config_path)!
Here is how the config.yaml additions/edits should look (for example, on the Openfield demo data we provide):
# Plotting configuration
skeleton: [['snout', 'leftear'], ['snout', 'rightear'], ['leftear', 'tailbase'], ['leftear', 'rightear'], ['rightear','tailbase']]
skeleton_color: white
pcutoff: 0.4
dotsize: 4
alphavalue: 0.5
colormap: jet
Then pass draw_skeleton=True with the command:
deeplabcut.create_labeled_video(config_path,['fullpath/afolderofvideos'], videotype='.mp4', draw_skeleton = True)
NEW as of 2.2b8: You can create a video with only the "dots" plotted, i.e., in the style of Johansson, by passing keypoints_only=True:
deeplabcut.create_labeled_video(config_path,['fullpath/afolderofvideos'], videotype='.mp4', keypoints_only=True)
PRO TIP: the best quality videos are created when save_frames=True is passed. Therefore, when trailpoints and draw_skeleton are used, we highly recommend you also pass save_frames=True!
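A sketch of such a combined call (the folder path is illustrative):
deeplabcut.create_labeled_video(config_path, ['fullpath/afolderofvideos'], videotype='.mp4', draw_skeleton=True, trailpoints=10, save_frames=True)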
This function has various other parameters; in particular, the user can set the colormap, the dotsize, and the alphavalue of the labels in the config.yaml file.
API Docs#
- deeplabcut.utils.make_labeled_video.create_labeled_video(config, videos, videotype='', shuffle=1, trainingsetindex=0, filtered=False, fastmode=True, save_frames=False, keypoints_only=False, Frames2plot=None, displayedbodyparts='all', displayedindividuals='all', codec='mp4v', outputframerate=None, destfolder=None, draw_skeleton=False, trailpoints=0, displaycropped=False, color_by='bodypart', modelprefix='', init_weights='', track_method='', superanimal_name='', pcutoff=0.6, skeleton=[], skeleton_color='white', dotsize=8, colormap='rainbow', alphavalue=0.5, overwrite=False, confidence_to_alpha: bool | Callable[[float], float] = False)#
Labels the bodyparts in a video.
Make sure the video is already analyzed by the function deeplabcut.analyze_videos.
- Parameters:
- config: string
Full path of the config.yaml file.
- videos: list[str]
A list of strings containing the full paths to videos for analysis or a path to the directory, where all the videos with same extension are stored.
- videotype: str, optional, default=””
Checks for the extension of the video in case the input to the video is a directory. Only videos with this extension are analyzed. If left unspecified, videos with common extensions (‘avi’, ‘mp4’, ‘mov’, ‘mpeg’, ‘mkv’) are kept.
- shuffle: int, optional, default=1
Integer specifying the shuffle index of the training dataset.
- trainingsetindex: int, optional, default=0
Integer specifying which TrainingsetFraction to use. Note that TrainingFraction is a list in config.yaml.
- filtered: bool, optional, default=False
Boolean variable indicating if filtered output should be plotted rather than frame-by-frame predictions. The filtered version can be calculated with deeplabcut.filterpredictions.
- fastmode: bool, optional, default=True
If True, uses openCV (much faster, but with less customization of the video); if False, matplotlib is used to create the frames. In the matplotlib mode you can also choose whether or not to save the individual frames (via the save_frames variable), and it is much more flexible (one can set the transparency of markers, crop, and easily customize).
- save_frames: bool, optional, default=False
If True, creates each frame individually and then combines them into a video. Setting this to True is relatively slow, as it stores all individual frames.
- keypoints_only: bool, optional, default=False
By default, both video frames and keypoints are visible. If True, only the keypoints are shown. These clips are an homage to Johansson movies; see https://www.youtube.com/watch?v=1F5ICP9SYLU and of course his seminal paper: "Visual perception of biological motion and a model for its analysis" by Gunnar Johansson in Perception & Psychophysics, 1973.
- Frames2plot: List[int] or None, optional, default=None
If not None and save_frames=True, then the frames corresponding to the index will be plotted. For example, Frames2plot=[0,11] will plot the first and the 12th frame.
- displayedbodyparts: list[str] or str, optional, default="all"
This selects the body parts that are plotted in the video. If "all", then all body parts from config.yaml are used. Alternatively, a list of strings that is a subset of the full list can be given, e.g. ['hand', 'Joystick'] for the demo Reaching-Mackenzie-2018-08-30/config.yaml to select only these body parts.
- displayedindividuals: list[str] or str, optional, default="all"
Individuals plotted in the video. By default, all individuals present in the config will be shown.
- codec: str, optional, default=”mp4v”
Codec for labeled video. For available options, see http://www.fourcc.org/codecs.php. Note that this depends on your ffmpeg installation.
- outputframerate: int or None, optional, default=None
Positive number specifying the output frame rate of the labeled video (only available in the frame-saving mode). If None, the original video frame rate is used.
- destfolder: string or None, optional, default=None
Specifies the destination folder that was used for storing analysis data. If None, the path of the video file is used.
- draw_skeleton: bool, optional, default=False
If True, adds lines connecting the body parts, making a skeleton, on each frame. The body parts to be connected and the color of these connecting lines are specified in the config file.
- trailpoints: int, optional, default=0
Number of previous frames whose body parts are plotted in a frame (for displaying history).
- displaycropped: bool, optional, default=False
Specifies whether only cropped frame is displayed (with labels analyzed therein), or the original frame with the labels analyzed in the cropped subset.
- color_by: string, optional, default='bodypart'
Coloring rule. By default, each bodypart is colored differently. If set to ‘individual’, points belonging to a single individual are colored the same.
- modelprefix: str, optional, default=””
Directory containing the deeplabcut models to use when evaluating the network. By default, the models are assumed to exist in the project folder.
- init_weights: str, optional, default=""
Checkpoint path to the super model
- track_method: string, optional, default=””
Specifies the tracker used to generate the data. Empty by default (corresponding to a single animal project). For multiple animals, must be either ‘box’, ‘skeleton’, or ‘ellipse’ and will be taken from the config.yaml file if none is given.
- overwrite: bool, optional, default=False
If True, overwrites existing labeled videos.
- confidence_to_alpha: Union[bool, Callable[[float], float]], optional, default=False
If False, all keypoints will be plotted with alpha=1. Otherwise, this can be defined as a function f: [0, 1] -> [0, 1] such that the alpha value for a keypoint will be set as a function of its score: alpha = f(score). The default function used when True is f(x) = max(0, (x - pcutoff)/(1 - pcutoff)).
- Returns:
- results: list[bool]
True if the video was successfully created, for each item in videos.
Examples
Create the labeled video for a single video
>>> deeplabcut.create_labeled_video( '/analysis/project/reaching-task/config.yaml', ['/analysis/project/videos/reachingvideo1.avi'], )
Create the labeled video for a single video and store the individual frames
>>> deeplabcut.create_labeled_video( '/analysis/project/reaching-task/config.yaml', ['/analysis/project/videos/reachingvideo1.avi'], fastmode=True, save_frames=True, )
Create the labeled video for multiple videos
>>> deeplabcut.create_labeled_video( '/analysis/project/reaching-task/config.yaml', [ '/analysis/project/videos/reachingvideo1.avi', '/analysis/project/videos/reachingvideo2.avi', ], )
Create the labeled video for all the videos with an .avi extension in a directory.
>>> deeplabcut.create_labeled_video( '/analysis/project/reaching-task/config.yaml', ['/analysis/project/videos/'], )
Create the labeled video for all the videos with an .mp4 extension in a directory.
>>> deeplabcut.create_labeled_video( '/analysis/project/reaching-task/config.yaml', ['/analysis/project/videos/'], videotype='mp4', )
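Create a labeled video where marker transparency scales with prediction confidence (a sketch of the confidence_to_alpha option described above; the helper function and its 0.6 cutoff are illustrative)
>>> def fade_low_confidence(score):
...     # assumed mapping: transparent below a 0.6 cutoff, ramping linearly to fully opaque at 1.0
...     return max(0.0, (score - 0.6) / (1 - 0.6))
>>> deeplabcut.create_labeled_video( '/analysis/project/reaching-task/config.yaml', ['/analysis/project/videos/reachingvideo1.avi'], confidence_to_alpha=fade_low_confidence, )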
Extract “Skeleton” Features:#
NEW, as of 2.0.7+: You can save the "skeleton" that was applied in create_labeled_video for further computations. Namely, it extracts the length and orientation of each "bone" of the skeleton as defined in the config.yaml file. You can use the function as follows:
deeplabcut.analyzeskeleton(config, video, videotype='avi', shuffle=1, trainingsetindex=0, save_as_csv=False, destfolder=None)
API Docs#
- deeplabcut.post_processing.analyze_skeleton.analyzeskeleton(config, videos, videotype='', shuffle=1, trainingsetindex=0, filtered=False, save_as_csv=False, destfolder=None, modelprefix='', track_method='', return_data=False)#
Extracts length and orientation of each “bone” of the skeleton.
The bone and skeleton information is defined in the config file.
- Parameters:
- config: str
Full path of the config.yaml file.
- videos: list[str]
The full paths to videos for analysis or a path to the directory, where all the videos with same extension are stored.
- videotype: str, optional, default=””
Checks for the extension of the video in case the input to the video is a directory. Only videos with this extension are analyzed. If left unspecified, videos with common extensions (‘avi’, ‘mp4’, ‘mov’, ‘mpeg’, ‘mkv’) are kept.
- shuffle: int, optional, default=1
The shuffle index of the training dataset.
- trainingsetindex: int, optional, default=0
Integer specifying which TrainingsetFraction to use. Note that TrainingFraction is a list in config.yaml.
- filtered: bool, optional, default=False
Boolean variable indicating if filtered output should be used rather than frame-by-frame predictions. The filtered version can be calculated with deeplabcut.filterpredictions.
- save_as_csv: bool, optional, default=False
Saves the predictions in a .csv file.
- destfolder: string or None, optional, default=None
Specifies the destination folder for analysis data. If None, the path of the video is used. Note that for subsequent analysis this folder also needs to be passed.
- modelprefix: str, optional, default=""
Directory containing the deeplabcut models to use when evaluating the network. By default, the models are assumed to exist in the project folder.
- track_method: string, optional, default=””
Specifies the tracker used to generate the data. Empty by default (corresponding to a single animal project). For multiple animals, must be either ‘box’, ‘skeleton’, or ‘ellipse’ and will be taken from the config.yaml file if none is given.
- return_data: bool, optional, default=False
If True, returns a dictionary of the skeleton data keyed by video names.
- Returns:
- video_to_skeleton_df
Dictionary mapping video filepaths to skeleton dataframes.
If no videos exist, the dictionary will be empty.
If a video is not analyzed, the corresponding value in the dictionary will be None.
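To work with the bone measurements directly in Python, one can request the dataframes via return_data; a minimal sketch (the video path is illustrative):
video = 'fullpath/analysis/project/videos/reachingvideo1.avi'
skeleton_data = deeplabcut.analyzeskeleton(config_path, [video], save_as_csv=True, return_data=True)
df = skeleton_data[video]  # bone lengths and orientations, or None if the video was not analyzed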
(M) Optional Active Learning -> Network Refinement: Extract Outlier Frames#
While DeepLabCut typically generalizes well across datasets, one might want to optimize its performance in various, perhaps unexpected, situations. For generalization to large data sets, images with insufficient labeling performance can be extracted, manually corrected by adjusting the labels to increase the training set and iteratively improve the feature detectors. Such an active learning framework can be used to achieve a predefined level of confidence for all images with minimal labeling cost (discussed in Mathis et al 2018). Then, due to the large capacity of the neural network that underlies the feature detectors, one can continue training the network with these additional examples. One does not necessarily need to correct all errors as common errors could be eliminated by relabeling a few examples and then re-training. A priori, given that there is no ground truth data for analyzed videos, it is challenging to find putative “outlier frames”. However, one can use heuristics such as the continuity of body part trajectories, to identify images where the decoder might make large errors.
All this can be done for a specific video by typing (see other optional inputs below):
deeplabcut.extract_outlier_frames(config_path, ['videofile_path'])
We provide various frame-selection methods for this purpose. In particular the user can set:
outlieralgorithm: 'fitting', 'jump', 'uncertain', or 'manual'
• select frames if the likelihood of a particular or all body parts lies below pbound (note this could also be due to occlusions rather than errors) (outlieralgorithm='uncertain'); in this case you should also set p_bound.
• select frames where a particular body part or all body parts jumped more than epsilon pixels from the last frame (outlieralgorithm='jump').
• select frames if the predicted body part location deviates from a state-space model fit to the time series of individual body parts (outlieralgorithm='fitting'). Specifically, this method fits an Auto Regressive Integrated Moving Average (ARIMA) model to the time series for each body part. Thereby, each body part detection with a likelihood smaller than pbound is treated as missing data. Putative outlier frames are then identified as time points where the average body part estimates are at least epsilon pixels away from the fits. The parameters of this method are epsilon, pbound, the ARIMA parameters, as well as the list of body parts to average over (which can also be all).
• manually select outlier frames based on visual inspection from the user (outlieralgorithm='manual').
As an example:
deeplabcut.extract_outlier_frames(config_path, ['videofile_path'], outlieralgorithm='manual')
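Similarly, to flag frames where body part likelihoods drop below a chosen threshold, use the 'uncertain' algorithm together with p_bound (a sketch; the threshold value of 0.1 is illustrative):
deeplabcut.extract_outlier_frames(config_path, ['videofile_path'], outlieralgorithm='uncertain', p_bound=0.1)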
In general, depending on the parameters, these methods might return many more frames than the user wants to extract (numframes2pick). Thus, this list is then used to select outlier frames either by randomly sampling from this list (extractionalgorithm='uniform') or by performing extractionalgorithm='kmeans' clustering on the corresponding frames.
In the automatic configuration, before the frame selection happens, the user is informed about the number of frames satisfying the criteria and asked whether the selection should proceed. This step allows the user to first change the parameters of the frame-selection heuristics (i.e., to make sure that not too many frames qualify). The user can run extract_outlier_frames iteratively, and (even) extract additional frames from the same video. Once enough outlier frames are extracted, the refinement GUI can be used to adjust the labels based on user feedback (see below).
API Docs#
- deeplabcut.refine_training_dataset.outlier_frames.extract_outlier_frames(config, videos, videotype='', shuffle=1, trainingsetindex=0, outlieralgorithm='jump', frames2use=None, comparisonbodyparts='all', epsilon=20, p_bound=0.01, ARdegree=3, MAdegree=1, alpha=0.01, extractionalgorithm='kmeans', automatic=False, cluster_resizewidth=30, cluster_color=False, opencv=True, savelabeled=False, copy_videos=False, destfolder=None, modelprefix='', track_method='')#
Extracts the outlier frames.
Extracts the outlier frames if the predictions are not correct for a certain video from the cropped video running from start to stop as defined in config.yaml.
Another crucial parameter in config.yaml is how many frames to extract: numframes2pick.
- Parameters:
- config: str
Full path of the config.yaml file.
- videos: list[str]
The full paths to videos for analysis or a path to the directory, where all the videos with same extension are stored.
- videotype: str, optional, default=””
Checks for the extension of the video in case the input to the video is a directory. Only videos with this extension are analyzed. If left unspecified, videos with common extensions (‘avi’, ‘mp4’, ‘mov’, ‘mpeg’, ‘mkv’) are kept.
- shuffle: int, optional, default=1
The shuffle index of training dataset. The extracted frames will be stored in the labeled-dataset for the corresponding shuffle of training dataset.
- trainingsetindex: int, optional, default=0
Integer specifying which TrainingsetFraction to use. Note that TrainingFraction is a list in config.yaml.
- outlieralgorithm: str, optional, default=”jump”.
String specifying the algorithm used to detect the outliers.
'fitting' fits an Auto Regressive Integrated Moving Average model to the data and computes the distance to the estimated data; distances larger than epsilon are then potentially identified as outliers.
'jump' identifies jumps larger than 'epsilon' in any body part.
'uncertain' looks for frames with confidence below p_bound.
'manual' launches a GUI from which the user can choose the frames.
'list' expects the user to provide a list of frame numbers to use ('frames2use'); in this case, 'extractionalgorithm' is forced to be 'uniform'.
- frames2use: list[str], optional, default=None
If 'outlieralgorithm' is 'list', provide the list of frames here.
- comparisonbodyparts: list[str] or str, optional, default="all"
This selects the body parts for which the comparisons with the outliers are carried out. If "all", then all body parts from config.yaml are used. Alternatively, a list of strings that is a subset of the full list can be given, e.g. ['hand', 'Joystick'] for the demo Reaching-Mackenzie-2018-08-30/config.yaml to select only these body parts.
- p_bound: float between 0 and 1, optional, default=0.01
For outlieralgorithm 'uncertain', this parameter defines the likelihood below which a body part will be flagged as a putative outlier.
- epsilon: float, optional, default=20
If 'outlieralgorithm' is 'fitting', this is the float bound according to which frames are picked when the (average) body part estimate deviates from the model fit.
If 'outlieralgorithm' is 'jump', this is the float bound specifying the distance by which body parts jump from one frame to the next (Euclidean distance).
- ARdegree: int, optional, default=3
For outlieralgorithm 'fitting': autoregressive degree of the ARIMA model. (Note we use SARIMAX without the exogenous and seasonal parts.) See https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html
- MAdegree: int, optional, default=1
For outlieralgorithm 'fitting': moving average degree of the ARIMA model. (Note we use SARIMAX without the exogenous and seasonal parts.) See https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html
- alpha: float, optional, default=0.01
Significance level for detecting outliers based on confidence interval of fitted ARIMA model. Only the distance is used however.
- extractionalgorithm: str, optional, default="kmeans"
String specifying the algorithm to use for selecting the frames from the identified putative outlier frames. Currently, deeplabcut supports either kmeans- or uniform-based selection (same logic as for extract_frames).
- automatic: bool, optional, default=False
If True, extract outliers without being asked for user feedback.
- cluster_resizewidth: number, default=30
If "extractionalgorithm" is "kmeans", one can change the width to which the images are downsampled (aspect ratio is fixed).
- cluster_color: bool, optional, default=False
If False, each downsampled image is treated as a grayscale vector (discarding color information). If True, then the color channels are considered. This increases the computational complexity.
- opencv: bool, optional, default=True
Uses openCV for loading & extracting (otherwise moviepy (legacy)).
- savelabeled: bool, optional, default=False
If True, frames are saved with predicted labels in each folder.
If True, newly-added videos (from which outlier frames are extracted) are copied to the project folder. By default, symbolic links are created instead.
- destfolder: str or None, optional, default=None
Specifies the destination folder that was used for storing analysis data. If None, the path of the video is used.
- modelprefix: str, optional, default=""
Directory containing the deeplabcut models to use when evaluating the network. By default, the models are assumed to exist in the project folder.
- track_method: str, optional, default=””
Specifies the tracker used to generate the data. Empty by default (corresponding to a single animal project). For multiple animals, must be either ‘box’, ‘skeleton’, or ‘ellipse’ and will be taken from the config.yaml file if none is given.
- Returns:
- None
Examples
Extract the frames with default settings on Windows.
>>> deeplabcut.extract_outlier_frames( r'C:\myproject\reaching-task\config.yaml', [r'C:\yourusername\rig-95\Videos\reachingvideo1.avi'], )
Extract the frames with default settings on Linux/MacOS.
>>> deeplabcut.extract_outlier_frames( '/analysis/project/reaching-task/config.yaml', ['/analysis/project/video/reachinvideo1.avi'], )
Extract the frames using the “kmeans” algorithm.
>>> deeplabcut.extract_outlier_frames( '/analysis/project/reaching-task/config.yaml', ['/analysis/project/video/reachinvideo1.avi'], extractionalgorithm='kmeans', )
Extract the frames using the "kmeans" algorithm and epsilon=5 pixels.
>>> deeplabcut.extract_outlier_frames( '/analysis/project/reaching-task/config.yaml', ['/analysis/project/video/reachinvideo1.avi'], epsilon=5, extractionalgorithm='kmeans', )
(N) Refine Labels: Augmentation of the Training Dataset#
Based on the performance of DeepLabCut, four scenarios are possible:
(A) Visible body part with accurate DeepLabCut prediction. These labels do not need any modifications.
(B) Visible body part but wrong DeepLabCut prediction. Move the label’s location to the actual position of the body part.
(C) Invisible, occluded body part. Remove the predicted label by DeepLabCut with a middle click. Every predicted label is shown, even when DeepLabCut is uncertain. This is necessary, so that the user can potentially move the predicted label. However, to help the user to remove all invisible body parts the low-likelihood predictions are shown as open circles (rather than disks).
(D) Invalid images: In the unlikely event that there are any invalid images, the user should remove such an image and their corresponding predictions, if any. Here, the GUI will prompt the user to remove an image identified as invalid.
The labels for extracted putative outlier frames can be refined by opening the GUI:
deeplabcut.refine_labels(config_path)
This will launch a GUI where the user can refine the labels.
Use the ‘Load Labels’ button to select one of the subdirectories, where the extracted frames are stored. Every label will be identified by a unique color. For better chances to identify the low-confidence labels, specify the threshold of the likelihood. This changes the body parts with likelihood below this threshold to appear as circles and the ones above as solid disks while retaining the same color scheme. Next, to adjust the position of the label, hover the mouse over the labels to identify the specific body part, left click and drag it to a different location. To delete a specific label, middle click on the label (once a label is deleted, it cannot be retrieved).
After correcting the labels for all the frames in each of the subdirectories, the users should merge the data set to create a new dataset. In this step the iteration parameter in the config.yaml file is automatically updated.
deeplabcut.merge_datasets(config_path)
Once the dataset is merged, the user can test if the merging process was successful by plotting all the labels (Step E).
Next, with this expanded training set, the user can create a novel training dataset and train the network as described in Steps F and G. The training dataset will be stored in the same place as before, but under a different iteration-# subdirectory, where # is the new value of the iteration variable stored in the project's configuration file (this is updated automatically).
Now you can run create_training_dataset, then train_network, etc. If your original labels were adjusted at all, start from fresh weights (the typically recommended path anyhow); otherwise, consider using your already trained network weights (see Box 2).
If after training the network generalizes well to the data, proceed to analyze new videos. Otherwise, consider labeling more data.
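A minimal sketch of the full refinement loop after correcting labels (all calls use default settings; the video path is a placeholder):
deeplabcut.merge_datasets(config_path)
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)
deeplabcut.evaluate_network(config_path)
deeplabcut.analyze_videos(config_path, ['fullpath/videofile.avi'])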
API Docs for deeplabcut.refine_labels#
API Docs for deeplabcut.merge_datasets#
- deeplabcut.refine_training_dataset.outlier_frames.merge_datasets(config, forceiterate=None)#
Merge the original training dataset with the newly refined data.
Checks if the original training dataset can be merged with the newly refined training dataset. To do so it will check if the frames in all extracted video sets were relabeled.
If this is the case, then the "iteration" variable is advanced by 1.
- Parameters:
- config: str
Full path of the config.yaml file.
- forceiterate: int or None, optional, default=None
If an integer is given, the iteration variable is set to this value. This is only done if all datasets were labeled or refined.
Examples
>>> deeplabcut.merge_datasets('/analysis/project/reaching-task/config.yaml')
Jupyter Notebooks for Demonstration of the DeepLabCut Workflow#
We also provide two Jupyter notebooks for using DeepLabCut on both a pre-labeled dataset and on the end user's own dataset. First, we prepared an interactive Jupyter notebook called run_yourowndata.ipynb that can serve as a template for the user to develop a project. Furthermore, we provide a notebook for an already started project with labeled data. The example project, named Reaching-Mackenzie-2018-08-30, consists of a project configuration file with default parameters and 20 images, which are cropped around the region of interest as an example dataset. These images are extracted from a video that was recorded in a study of skilled motor control in mice. Some example labels for these images are also provided. See more details here.
3D Toolbox#
Please see 3D overview for information on using the 3D toolbox of DeepLabCut (as of 2.0.7+).
Other functions, some yet-to-be-documented:#
We suggest you check out these additional helper functions, which could be useful (they are all optional).