Idtracker.ai will generate a
session_[SESSION_NAME] folder in the same directory where the input videos are (or in the
--output_dir path if specified, see Output). The session folder may have the following structure:
```
session_[SESSION_NAME]
├─ accumulation_*
│  ├─ identification_network.model.pth
│  ├─ list_of_fragments.json
│  └─ model_params.json
├─ crossings_detector
│  └─ crossing_detector.model.pth
├─ identification_images
│  └─ id_images_*.hdf5
├─ preprocessing
│  ├─ background.png
│  ├─ list_of_blobs.pickle
│  ├─ list_of_fragments.json
│  ├─ list_of_global_fragments.json
│  └─ ROI_mask.png
├─ segmentation_data
│  └─ episode_images_*.hdf5
├─ trajectories
│  ├─ with_gaps_csv
│  │  ├─ areas.csv
│  │  ├─ id_probabilities.csv
│  │  ├─ trajectories.csv
│  │  └─ attributes.json
│  ├─ without_gaps_csv
│  │  ├─ areas.csv
│  │  ├─ id_probabilities.csv
│  │  ├─ trajectories.csv
│  │  └─ attributes.json
│  ├─ with_gaps.npy
│  └─ without_gaps.npy
├─ session.json
└─ idtrackerai.log
```
The trajectories folder contains the most valuable data for the end user: the position of every animal in every video frame. See how to read these files in Trajectory files.
The session folder also holds a copy of the session log, idtrackerai.log, made at the end of the process (successful or not). This file contains information about the entire tracking process and is essential for debugging and understanding idtracker.ai (see Tracking log).
Most of the generated data is a byproduct of the tracking process and is not meant to be read or used by the end user. Still, the content of each folder can be summarized as follows:

accumulation_* contains the identification network model and parameters. It can be used to match identities across sessions with Idmatcher.ai.

crossings_detector contains the individual/crossing classification network model and parameters.

identification_images contains the images used for identification, that is, an image for every animal in every frame of the video.

preprocessing contains the blobs, fragments and global fragments objects (in Python pickle format). Advanced users can dive into these objects to gather extra information about the tracking. The ROI mask and the computed background are also saved here.

segmentation_data contains the temporary images used to generate the final identification images.

session.json contains basic properties of the video and the session in human-readable .json format.
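Since session.json is plain JSON, it can be inspected with Python's standard library. A minimal sketch (the key names here are illustrative, not the file's actual schema):

```python
import json

# Illustrative JSON snippet; the real session.json keys may differ,
# so open your own file to see its actual properties
session_text = '{"version": "5.0.0", "frames_per_second": 25}'
session = json.loads(session_text)

print(session["frames_per_second"])
```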
The most useful files for the end user are the trajectory files, located in the folder trajectories. The main ones are the binary .npy files; once the tracking process finishes successfully, they can be loaded in Python with:
```python
import numpy as np

trajectories_dict = np.load(
    "./session_example/trajectories/without_gaps.npy", allow_pickle=True
).item()
```
Since .npy files can only be loaded with NumPy (Python), idtracker.ai automatically generates a copy of these files in human-readable .csv and .json formats.
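The .csv copies can be parsed with any standard CSV reader. As a sketch, assuming the trajectories.csv file has one x/y column pair per identity (inspect the header of your own file to confirm the exact layout):

```python
import io

import numpy as np

# Hypothetical excerpt of a trajectories.csv file; the real header names
# and column order may differ, so check the first line of your own copy
csv_text = "x1,y1,x2,y2\n0.0,0.0,10.0,10.0\n1.0,1.0,11.0,11.0\n"

flat = np.genfromtxt(io.StringIO(csv_text), delimiter=",", skip_header=1)

# Reshape the flat columns back into the (N_frames, N_animals, 2)
# layout used by the .npy trajectory files
trajectories = flat.reshape(flat.shape[0], -1, 2)
```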
If you have an old session whose trajectory files were not translated to .csv, you can still convert these files by running
The .npy files contain a Python dictionary with the following keys:
trajectories: NumPy array with shape (N_frames, N_animals, 2) with the xy coordinates of each identity in each frame of the video.
version: the idtracker.ai version that created the file.
video_paths: input video paths.
frames_per_second: input video frame rate.
body_length: mean body length computed as the mean value of the diagonal of all individual blob’s bounding boxes.
stats: dictionary containing four different measurements of the session’s identification accuracy.
areas: dictionary containing the mean, median and standard deviation of the blobs area for each individual.
setup_points: dictionary of the user defined setup points (from validator).
identities_labels: list of user defined identity labels (from validator).
identities_groups: list of user defined identity groups (from validator).
id_probabilities: NumPy array with shape (N_frames, N_animals) with the identity assignment probability for each individual and frame of the video.
body_length is not a reliable measurement of the animal's real size: its value depends on the segmentation parameters and the video conditions.
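As an example of combining these keys, the xy coordinates and the frame rate are enough to compute each animal's speed. A minimal sketch with a synthetic array standing in for trajectories_dict["trajectories"]:

```python
import numpy as np

# Synthetic stand-in for trajectories_dict["trajectories"]:
# shape (N_frames, N_animals, 2), here 4 frames of 2 animals
# moving one pixel diagonally per frame
trajectories = np.array(
    [[[0.0, 0.0], [10.0, 10.0]],
     [[1.0, 1.0], [11.0, 11.0]],
     [[2.0, 2.0], [12.0, 12.0]],
     [[3.0, 3.0], [13.0, 13.0]]]
)
fps = 25  # would come from trajectories_dict["frames_per_second"]

# Displacement between consecutive frames, in pixels
displacement = np.diff(trajectories, axis=0)        # (N_frames - 1, N_animals, 2)

# Speed in pixels per second for each animal and frame transition
speed = np.linalg.norm(displacement, axis=2) * fps  # (N_frames - 1, N_animals)
```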
Types of trajectory files
When crossings occur, the identification network cannot be applied and the involved individuals cannot be located properly. In these situations, the trajectories have gaps filled with NaN values, i.e. frames where an individual could not be located. These trajectories are saved in with_gaps.npy.

To close the gaps, an interpolation algorithm runs and generates an improved without_gaps.npy file where most of the gaps have been closed. Some gaps are difficult to close, so there is no guarantee that without_gaps.npy contains no NaN values.
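Any remaining NaN gaps are easy to locate with NumPy. A minimal sketch, using a toy array in place of a real trajectory array:

```python
import numpy as np

# Toy (N_frames, N_animals, 2) array: animal 0 is lost during frame 1
trajectories = np.array(
    [[[0.0, 0.0], [5.0, 5.0]],
     [[np.nan, np.nan], [6.0, 5.0]],
     [[2.0, 0.0], [7.0, 5.0]]]
)

# A frame/animal pair is a gap if any of its coordinates is NaN
gap_mask = np.isnan(trajectories).any(axis=2)  # shape (N_frames, N_animals)

# Number of frames in which each animal could not be located
frames_lost_per_animal = gap_mask.sum(axis=0)
```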
When tracking without identities, the trajectories are saved only in with_gaps.npy. Since identities are assigned randomly in this mode, the interpolation algorithm for closing gaps cannot be applied.
Finally, if the Validator is used after the tracking, the
validated.npy file will contain the trajectories manually corrected by the user.