idtrackerai

Fingerprint protocol cascade

Accumulation manager

class accumulation_manager.AccumulationManager(video, list_of_fragments, list_of_global_fragments, certainty_threshold=0.1, allow_partial_accumulation=False, threshold_acceptable_accumulation=None)[source]

Manages the accumulation process.

Attributes

video (<Video object>) Object containing all the parameters of the video.
number_of_animals (int) Number of animals to be tracked.
list_of_fragments (ListOfFragments) Collection of individual and crossing fragments with associated methods.
list_of_global_fragments (ListOfGlobalFragments) Collection of global fragments.
counter (int) Number of iterations for an instantiation.
certainty_threshold (float) Value in [0, 1] used to establish whether the identification of a fragment is certain.
threshold_acceptable_accumulation (float) Value in [0, 1] used to establish whether an accumulation is acceptable.
accumulation_strategy (string) Accepts “global” and “partial” in order to perform either global or partial accumulation.
individual_fragments_used (list) List of the individual_fragments_identifiers of the individual fragments used for training.
used_images (ndarray) Images used for training the network.
used_labels (ndarray) Labels for the images used for training.
new_images (ndarray) Set of images that will be added to the new training.
new_labels (ndarray) Labels for the set of images that will be added for training.
_continue_accumulation (bool) Allows the accumulation to continue according to the stopping criteria.
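
A minimal instantiation sketch (hedged: the video, list_of_fragments and list_of_global_fragments objects are assumed to have been produced by the preprocessing steps that precede the cascade; the arguments follow the signature above):

    from accumulation_manager import AccumulationManager

    # `video`, `list_of_fragments` and `list_of_global_fragments` are assumed
    # to come from the earlier stages of the tracking pipeline.
    manager = AccumulationManager(
        video,
        list_of_fragments,
        list_of_global_fragments,
        certainty_threshold=0.1,
        allow_partial_accumulation=False,
    )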

Methods

assign_identities_to_fragments_used_for_training() Assigns the identities to the global fragments used for training and their individual fragments.
check_if_is_acceptable_for_training(…) Check if global_fragment is acceptable for training
get_P1_array_and_argsort() Given a global fragment computes P1 for each of its individual fragments and returns a matrix of sorted indices according to P1
get_acceptable_global_fragments_for_training(…) Assigns identities during test to individual fragments and ranks them according to the score computed from the certainty of identification and the minimum distance travelled
get_images_and_labels_for_training() Create a new dataset of labelled images to train the idCNN in the following way: Per individual select MAXIMAL_IMAGES_PER_ANIMAL images.
get_new_images_and_labels() Gets the images and labels of the new global fragments that are going to be used for training. This function checks whether the images of an individual fragment have been added before
is_not_certain(fragment, certainty_threshold) States whether a fragment has been assigned with insufficient certainty
p1_below_random(index_individual_fragment, …) Evaluate if a fragment has been assigned with a certainty lower than random (wrt the number of possible identities)
reset_accumulation_variables() After an accumulation is finished reinitialise the variables involved in the process.
reset_non_acceptable_fragment(fragment) Resets the collection of non-acceptable fragments.
reset_non_acceptable_global_fragment(…) Resets the flag for non-acceptable global fragments.
set_fragment_temporary_id(temporary_id, …) Given a P1 array relative to a global fragment sets to 0 the row relative to fragment which is temporarily identified with identity temporary_id
split_predictions_after_network_assignment(…) Gathers predictions relative to fragment images from the GPU and splits them according to their organisation in fragments.
update_counter() Update iteration counter
update_fragments_used_for_training() Once a global fragment has been used for training, sets the flag used_for_training to True and acceptable_for_training to False
update_individual_fragments_used_for_training() Returns the individual fragments used for training.
update_list_of_individual_fragments_used() Updates the list of individual fragments used for training and their identities.
update_used_images_and_labels() Sets as used the images already used for training
assign_identities_to_fragments_used_for_training()[source]

Assigns the identities to the global fragments used for training and their individual fragments. This function checks that the identities of the individual fragments in the global fragment are consistent with the previously assigned identities.

check_if_is_acceptable_for_training(global_fragment)[source]

Check if global_fragment is acceptable for training

Parameters:

global_fragment : GlobalFragment

Object collecting the individual fragments relative to a part of the video in which all the animals are visible

continue_accumulation

The accumulation stops when there are no more global fragments that are acceptable for training.

static get_P1_array_and_argsort(global_fragment)[source]

Given a global fragment computes P1 for each of its individual fragments and returns a matrix of sorted indices according to P1

Parameters:

global_fragment : GlobalFragment object

Collection of images relative to a part of the video in which all the animals are visible.

Returns:

P1_array : ndarray

P1 computed for every individual fragment in the global fragment

index_individual_fragments_sorted_by_P1_max_to_min : ndarray

Argsort of P1 array of each individual fragment
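
A toy illustration of the sorting step (the exact layout of P1_array is an assumption here: one row per individual fragment, one column per identity):

    import numpy as np

    # Toy P1 array: 3 individual fragments x 3 identities (layout assumed).
    P1_array = np.array([[0.2, 0.7, 0.1],
                         [0.5, 0.3, 0.2],
                         [0.1, 0.1, 0.8]])

    # Rank fragments from the most to the least certain assignment.
    index_sorted_by_P1_max_to_min = np.argsort(P1_array.max(axis=1))[::-1]
    print(index_sorted_by_P1_max_to_min)  # [2 0 1]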

get_acceptable_global_fragments_for_training(candidate_individual_fragments_identifiers)[source]

Assigns identities during test to individual fragments and ranks them according to the score computed from the certainty of identification and the minimum distance travelled

get_images_and_labels_for_training()[source]

Create a new dataset of labelled images to train the idCNN in the following way: per individual, select MAXIMAL_IMAGES_PER_ANIMAL images. This collection is composed of a fraction RATIO_NEW of new images (acquired in the current evaluation of the global fragments) and a fraction RATIO_OLD of images already used in the previous iteration.
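
A sketch of the per-animal sampling logic described above; MAXIMAL_IMAGES_PER_ANIMAL, RATIO_NEW and RATIO_OLD are the constants named in the summary, and the concrete values used here are placeholders:

    import numpy as np

    MAXIMAL_IMAGES_PER_ANIMAL = 3000  # placeholder value
    RATIO_NEW, RATIO_OLD = 0.4, 0.6   # placeholder values

    def sample_images_for_animal(new_images, old_images):
        # Mix new and previously used images for a single animal (sketch).
        n_new = min(len(new_images), int(MAXIMAL_IMAGES_PER_ANIMAL * RATIO_NEW))
        n_old = min(len(old_images), int(MAXIMAL_IMAGES_PER_ANIMAL * RATIO_OLD))
        new_sample = new_images[np.random.permutation(len(new_images))[:n_new]]
        old_sample = old_images[np.random.permutation(len(old_images))[:n_old]]
        return np.concatenate([new_sample, old_sample])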

get_new_images_and_labels()[source]

Gets the images and labels of the new global fragments that are going to be used for training. This function checks whether the images of an individual fragment have been added before.

static is_not_certain(fragment, certainty_threshold)[source]

States whether a fragment has been assigned with insufficient certainty

Parameters:

fragment : Fragment object

Collection of images related to the same individual

certainty_threshold : float

Lower boundary in [0,1] for the certainty of a fragment

Returns:

is_not_certain_flag : bool

True if the fragment has not been assigned with high enough certainty
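
A minimal sketch of the test, assuming a Fragment exposes its identification certainty as a scalar attribute (the attribute name certainty is an assumption):

    def is_not_certain(fragment, certainty_threshold):
        # True when the fragment's identification certainty falls below the
        # threshold (`fragment.certainty` is an assumed attribute name).
        return fragment.certainty < certainty_threshold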

static p1_below_random(P1_array, index_individual_fragment, fragment)[source]

Evaluate if a fragment has been assigned with a certainty lower than random (wrt the number of possible identities)

Parameters:

P1_array : ndarray

P1 vector of a fragment object

index_individual_fragment : ndarray

Argsort of the P1 array of fragment

fragment : Fragment

Fragment object containing images associated with a single individual

Returns:

p1_below_random_flag : bool

True if a fragment has been identified with a certainty below random
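
Chance level for an assignment is 1 divided by the number of possible identities; a sketch of the comparison (deriving the number of identities from the width of P1_array, and treating index_individual_fragment as the fragment's integer row index, are assumptions):

    import numpy as np

    def p1_below_random(P1_array, index_individual_fragment, fragment):
        # One P1 entry per possible identity (assumed layout).
        number_of_identities = P1_array.shape[1]
        best_p1 = np.max(P1_array[index_individual_fragment])
        return best_p1 < 1.0 / number_of_identities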

reset_accumulation_variables()[source]

After an accumulation is finished reinitialise the variables involved in the process.

reset_non_acceptable_fragment(fragment)[source]

Resets the collection of non-acceptable fragments.

Parameters:

fragment : Fragment object

Collection of images related to the same individual

reset_non_acceptable_global_fragment(global_fragment)[source]

Resets the flag for non-acceptable global fragments.

Parameters:

global_fragment : GlobalFragment object

Collection of images relative to a part of the video in which all the animals are visible.

static set_fragment_temporary_id(fragment, temporary_id, P1_array, index_individual_fragment)[source]

Given a P1 array relative to a global fragment, sets to 0 the row relative to the fragment that is temporarily identified with identity temporary_id

Parameters:

fragment : Fragment

Fragment object containing images associated with a single individual

temporary_id : int

temporary identifier associated to fragment

P1_array : ndarray

P1 vector of fragment

index_individual_fragment : int

Index of fragment with respect to a global fragment in which it is contained

Returns:

P1_array : ndarray

updated P1 array
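
A sketch of the update (the attribute used to record the temporary identity on the fragment is an assumption; zeroing the row prevents the fragment from being picked again when sorting by P1):

    def set_fragment_temporary_id(fragment, temporary_id, P1_array,
                                  index_individual_fragment):
        # Record the temporary identity (assumed attribute name) and zero the
        # fragment's row so it is not considered again in later iterations.
        fragment.temporary_id = temporary_id
        P1_array[index_individual_fragment, :] = 0.0
        return P1_array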

split_predictions_after_network_assignment(predictions, softmax_probs, indices_to_split, candidate_individual_fragments_identifiers)[source]

Gathers predictions relative to fragment images from the GPU and splits them according to their organisation in fragments.

update_counter()[source]

Update iteration counter

update_fragments_used_for_training()[source]

Once a global fragment has been used for training, sets the flag used_for_training to True and acceptable_for_training to False

update_individual_fragments_used_for_training()[source]

Returns the individual fragments used for training.

Returns:

individual_fragments_used_for_training : list

List of Fragment objects.

update_list_of_individual_fragments_used()[source]

Updates the list of individual fragments used for training and their identities. If an individual fragment was added before, it is not added again.

update_used_images_and_labels()[source]

Sets as used the images already used for training

accumulation_manager.get_predictions_of_candidates_fragments(net, video, fragments)[source]

Gets the predictions for the candidate individual fragments, i.e. those that have not yet been used to train the idCNN, in an iteration of the accumulation process

Parameters:

net : ConvNetwork object

network used to identify the animals

video : Video object

Object containing all the parameters of the video.

fragments : list

List of fragment objects

Returns:

assigner._predictions : ndarray

predictions associated to each image organised by individual fragments

assigner._softmax_probs : ndarray

softmax vector associated to each image organised by individual fragments

np.cumsum(lengths)[:-1] : ndarray

cumulative sum of the number of images contained in every fragment (used to rebuild the collection of images per fragment after gathering predictions and softmax vectors from the GPU)

candidate_individual_fragments_identifiers : list

list of fragment identifiers
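
A toy reconstruction showing how the cumulative sums returned above are used to split the flat per-image predictions back into per-fragment collections:

    import numpy as np

    lengths = [4, 2, 3]                    # images per fragment (toy values)
    predictions = np.arange(sum(lengths))  # flat predictions, one per image
    indices_to_split = np.cumsum(lengths)[:-1]
    per_fragment_predictions = np.split(predictions, indices_to_split)
    # -> [array([0, 1, 2, 3]), array([4, 5]), array([6, 7, 8])]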

Accumulator

The accumulator module contains the main routine used to compute the accumulation process, which is an essential part of both the second and third fingerprint protocols.

accumulator.accumulate(accumulation_manager, video, global_step, net, identity_transfer)[source]

Manages the process of accumulating labelled images. In complex videos, this process allows training the idCNN (or whatever function approximator is passed as net).

Parameters:

accumulation_manager : <accumulation_manager.AccumulationManager object>

Object managing the accumulation process (see AccumulationManager above).

video : <video.Video object>

Object collecting all the parameters of the video and paths for saving and loading

global_step : int

network epoch counter

net : <net.ConvNetwork object>

Convolutional neural network object created according to net.params

identity_transfer : bool

If True the identity of the individual is also transferred

Returns:

float

Ratio of accumulated images
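
A hedged sketch of how accumulate fits in the protocol loop; the stopping condition uses the continue_accumulation property documented above, and the handling of global_step and the iteration counter is simplified:

    # All argument objects are assumed to come from earlier pipeline stages.
    while accumulation_manager.continue_accumulation:
        ratio_accumulated = accumulate(
            accumulation_manager, video, global_step, net, identity_transfer
        )
        accumulation_manager.update_counter()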

accumulator.early_stop_criteria_for_accumulation(number_of_accumulated_images, number_of_unique_images_in_global_fragments)[source]

A particularly successful accumulation causes an early stop of the training and accumulation process. This function returns the ratio that is evaluated to trigger this behaviour.

Parameters:

number_of_accumulated_images : int

Number of images used during the accumulation process (the labelled dataset used to train the network is subsampled from this set of images).

number_of_unique_images_in_global_fragments : int

Total number of accumulable images.

Returns:

float

Ratio of accumulated images over accumulable images
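
The returned value is the fraction of accumulable images already accumulated; the caller compares it against a threshold to decide whether to stop early (the 0.9 used below is a placeholder, not the library's constant):

    ratio = early_stop_criteria_for_accumulation(
        number_of_accumulated_images=9500,
        number_of_unique_images_in_global_fragments=10000,
    )
    if ratio > 0.9:  # placeholder threshold
        print("early stop: the accumulation was successful enough")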

Assigner

assigner.assign(net, images, print_flag)[source]

Gathers the predictions relative to the images contained in images. Such predictions are returned as attributes of assigner.

Parameters:

net : <ConvNetwork object>

Convolutional neural network object created according to net.params

images : ndarray

array of images

print_flag : bool

If True additional information gathered while getting the predictions is displayed in the terminal

Returns:

<GetPrediction object>

The assigner object has as main attributes the list of predictions associated with the images and the corresponding softmax vectors

See also

GetPrediction
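
A usage sketch; net and images are assumed to be produced by the training and preprocessing stages, and the attribute names follow the ones listed for get_predictions_of_candidates_fragments above:

    # `net` is a trained ConvNetwork and `images` an ndarray of animal images
    # (both assumed to come from earlier stages of the pipeline).
    assigner_object = assign(net, images, print_flag=True)
    predictions = assigner_object._predictions      # one prediction per image
    softmax_probs = assigner_object._softmax_probs  # one softmax vector per image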

assigner.assign_identity(list_of_fragments)[source]

Identifies the individual fragments recursively, based on the value of P2

Parameters:

list_of_fragments : <ListOfFragments object>

collection of the individual fragments and associated methods

assigner.assigner(list_of_fragments, video, net)[source]

This is the main function of this module: given a list_of_fragments, it puts in place the routine to identify, if possible, each of the individual fragments. The starting point for the identification is given by the predictions produced by the ConvNetwork net passed as input. The organisation of the images in individual fragments is then used to assign identities more accurately.

Parameters:

list_of_fragments : <ListOfFragments object>

collection of the individual fragments and associated methods

video : <Video object>

Object collecting all the parameters of the video and paths for saving and loading

net : <ConvNetwork object>

Convolutional neural network object created according to net.params

See also

ListOfFragments.reset, ListOfFragments.get_images_from_fragments_to_assign, assign, compute_identification_statistics_for_non_accumulated_fragments

assigner.compute_identification_statistics_for_non_accumulated_fragments(fragments, assigner, number_of_animals=None)[source]

Given the predictions associated with the images in each (individual) fragment in the list fragments, computes the statistics necessary for the identification of those fragments.

Parameters:

fragments : list

List of individual fragment objects

assigner : <GetPrediction object>

The assigner object has as main attributes the list of predictions associated with the images and the corresponding softmax vectors

number_of_animals : int

number of animals to be tracked

Trainer

Trains an instance of the class ConvNetwork

trainer.train(video, fragments, net, images, labels, store_accuracy_and_error, check_for_loss_plateau, save_summaries, print_flag, plot_flag, global_step=0, identity_transfer=False, accumulation_manager=None, batch_size=50)[source]

Trains an instance of the class ConvNetwork on a set of labelled images.

Parameters:

video : <Video object>

an instance of the class Video

fragments : list

list of instances of the class Fragment

net : <ConvNetwork object>

an instance of the class ConvNetwork

images : ndarray

array of shape [number_of_images, height, width]

labels : ndarray

array of shape [number_of_images, number_of_animals]

store_accuracy_and_error : bool

if True the values of the loss function, accuracy and individual accuracy will be stored

check_for_loss_plateau : bool

if True the stopping criteria (see stop_training_criteria) will automatically stop the training in case the loss function computed for the validation set of images reaches a plateau

save_summaries : bool

if True tensorflow summaries will be generated and stored to allow tensorboard visualisation of both loss and activity histograms

print_flag : bool

if True additional information is printed in the terminal

plot_flag : bool

if True training and validation loss, accuracy and individual accuracy are plotted in a graph at the end of the training session

global_step : int

global counter of the training epochs

identity_transfer : bool

If True the identity of the individual is also transferred

accumulation_manager : <AccumulationManager object>

an instance of the class AccumulationManager

batch_size : int

size of the batch of images used for training

Returns:

int

global epoch counter updated after the training session

<ConvNetwork object>

network with updated parameters after training

float

ratio of images used for pretraining over the total number of available images

<Store_Accuracy_and_Loss object>

updated with the values collected on the training set of labelled images

<Store_Accuracy_and_Loss object>

updated with the values collected on the validation set of labelled images
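
A call sketch mirroring the signature above; the tuple unpacking follows the order of the return values just listed, and all input objects are assumed to come from the preceding pipeline stages:

    global_step, net, ratio, store_training, store_validation = train(
        video, fragments, net, images, labels,
        store_accuracy_and_error=False,
        check_for_loss_plateau=True,
        save_summaries=False,
        print_flag=True,
        plot_flag=False,
        global_step=0,
        batch_size=50,
    )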

Pretraining

Pretrains the network as the first step of the third fingerprint protocol.

pre_trainer.pre_train(video, list_of_fragments, list_of_global_fragments, params, store_accuracy_and_error, check_for_loss_plateau, save_summaries, print_flag, plot_flag)[source]

Performs pretraining by iterating over the list of global fragments sorted by distance travelled, until the threshold MAX_RATIO_OF_PRETRAINED_IMAGES is reached

Parameters:

video : <Video object>

an instance of the class Video

list_of_fragments : <ListOfFragments object>

an instance of the class ListOfFragments

list_of_global_fragments : <ListOfGlobalFragments object>

an instance of the class ListOfGlobalFragments

params : <NetworkParams object>

an instance of the class NetworkParams

store_accuracy_and_error : bool

if True the values of the loss function, accuracy and individual accuracy will be stored

check_for_loss_plateau : bool

if True the stopping criteria (see stop_training_criteria) will automatically stop the training in case the loss function computed for the validation set of images reaches a plateau

save_summaries : bool

if True tensorflow summaries will be generated and stored to allow tensorboard visualisation of both loss and activity histograms

print_flag : bool

if True additional information is printed in the terminal

plot_flag : bool

if True training and validation loss, accuracy and individual accuracy are plotted in a graph at the end of the training session

Returns:

<ConvNetwork object>

an instance of the class ConvNetwork

pre_trainer.pre_train_global_fragment(net, pretraining_global_fragment, list_of_fragments, global_epoch, check_for_loss_plateau, store_accuracy_and_error, save_summaries, store_training_accuracy_and_loss_data, store_validation_accuracy_and_loss_data, print_flag=False, plot_flag=False, batch_size=None, canvas_from_GUI=None)[source]

Performs pretraining on a single global fragment

Parameters:

net : <ConvNetwork object>

an instance of the class ConvNetwork

pretraining_global_fragment : <GlobalFragment object>

an instance of the class GlobalFragment

list_of_fragments : <ListOfFragments object>

an instance of the class ListOfFragments

global_epoch : int

global counter of the training epoch in pretraining

check_for_loss_plateau : bool

if True the stopping criteria (see stop_training_criteria) will automatically stop the training in case the loss function computed for the validation set of images reaches a plateau

store_accuracy_and_error : bool

if True the values of the loss function, accuracy and individual accuracy will be stored

save_summaries : bool

if True tensorflow summaries will be generated and stored to allow tensorboard visualisation of both loss and activity histograms

store_training_accuracy_and_loss_data : <Store_Accuracy_and_Loss object>

an instance of the class Store_Accuracy_and_Loss

store_validation_accuracy_and_loss_data : <Store_Accuracy_and_Loss object>

an instance of the class Store_Accuracy_and_Loss

print_flag : bool

if True additional information is printed in the terminal

plot_flag : bool

if True training and validation loss, accuracy and individual accuracy are plotted in a graph at the end of the training session

batch_size : int

size of the batch of images used for training

canvas_from_GUI : matplotlib figure canvas

canvas of the matplotlib figure initialised in Tracker used to update the figure in the GUI visualisation of pretraining

Returns:

<ConvNetwork object>

network with updated parameters after training

float

ratio of images used for pretraining over the total number of available images

int

global epoch counter updated after the training session

<Store_Accuracy_and_Loss object>

updated with the values collected on the training set of labelled images

<Store_Accuracy_and_Loss object>

updated with the values collected on the validation set of labelled images

<ListOfFragments object>

list of instances of the class Fragment

pre_trainer.pre_trainer(old_video, video, list_of_fragments, list_of_global_fragments, pretrain_network_params)[source]

Initialises and starts the pretraining (3rd fingerprint protocol)

Parameters:

old_video : <Video object>

an instance of the class Video

video : <Video object>

an instance of the class Video

list_of_fragments : <ListOfFragments object>

an instance of the class ListOfFragments

list_of_global_fragments : <ListOfGlobalFragments object>

an instance of the class ListOfGlobalFragments

pretrain_network_params : <NetworkParams object>

an instance of the class NetworkParams
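
A call sketch; all arguments are assumed to be produced by earlier stages of the pipeline (interpreting old_video as the Video object of a previously processed session is an assumption):

    pre_trainer(old_video, video, list_of_fragments,
                list_of_global_fragments, pretrain_network_params)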