idtrackerai

Identification network

idCNN

This module contains the main TensorFlow operations needed to train and use the idCNN

Network parameters

class network_params.NetworkParams(number_of_animals, cnn_model=0, learning_rate=None, keep_prob=None, use_adam_optimiser=False, scopes_layers_to_optimize=None, restore_folder=None, save_folder=None, knowledge_transfer_folder=None, image_size=None, number_of_channels=None, video_path=None)[source]

Manages the network hyperparameters and other variables related to the identification model (see idCNN)

Attributes

video_path (string) Path to the video file
number_of_animals (int) Number of animals in the video
learning_rate (float) Learning rate for the optimizer
keep_prob (float) Dropout probability
_restore_folder (string) Path to the folder where the model to be restored is
_save_folder (string) Path to the folder where the checkpoints of the current model are stored
_knowledge_transfer_folder (string) Path to the folder where the model to be used for knowledge transfer is saved
use_adam_optimiser (bool) Flag indicating whether to use the Adam optimizer with the parameters given in [2]
scopes_layers_to_optimize (list) List with the scope names of the layers to be optimized
_cnn_model (int) Number indicating the model number to be used from the dictionary of models CNN_MODELS_DICT in id_CNN
image_size (tuple) Tuple (height, width, channels) for the input images
number_of_channels (int) Number of channels of the input image
[2] Kingma, Diederik P., and Jimmy Ba. “Adam: A method for stochastic optimization.” arXiv preprint arXiv:1412.6980 (2014).
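The constructor signature above maps directly onto these attributes. A minimal sketch of such a hyperparameter container, as a hypothetical stand-in for (not a copy of) the actual idtrackerai class:

```python
from dataclasses import dataclass

@dataclass
class NetworkParamsSketch:
    """Hypothetical container mirroring the documented NetworkParams fields."""
    number_of_animals: int
    cnn_model: int = 0                      # key into CNN_MODELS_DICT in id_CNN
    learning_rate: float = None
    keep_prob: float = None                 # dropout keep probability
    use_adam_optimiser: bool = False
    scopes_layers_to_optimize: list = None  # scope names of layers to train
    restore_folder: str = None
    save_folder: str = None
    knowledge_transfer_folder: str = None
    image_size: tuple = None                # (height, width, channels)
    number_of_channels: int = None
    video_path: str = None
```

Collecting the hyperparameters in one object lets the same configuration be passed to training, restoring, and knowledge-transfer code paths.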

Stop training criteria (idCNN)

class stop_training_criteria.Stop_Training(number_of_animals, epochs_before_checking_stopping_conditions=10, check_for_loss_plateau=True, first_accumulation_flag=False)[source]

Stops the training of the network according to the conditions specified in __call__()

Attributes

num_epochs (int) Number of epochs before starting the training
number_of_animals (int) Number of animals in the video
epochs_before_checking_stopping_conditions (int) Number of epochs before starting to check the stopping conditions
overfitting_counter (int) Counts the number of consecutive overfitting epochs during training
check_for_loss_plateau (bool) Flag to check for a plateau in the validation loss
first_accumulation_flag (bool) Flag to indicate that it is the first step of the accumulation

Methods

__call__(loss_accuracy_training, …) Returns True when one of the conditions to stop the training is satisfied.
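The attributes above suggest two checks: counting consecutive epochs in which the validation loss rises (overfitting) and detecting a plateau in the recent validation losses. A sketch of that logic only; the thresholds `overfitting_limit` and `plateau_tolerance` are illustrative assumptions, not idtrackerai parameters:

```python
class StopTrainingSketch:
    """Illustrative stopping criterion; not the actual idtrackerai implementation.

    Stops when the validation loss has risen for `overfitting_limit`
    consecutive epochs, or when recent validation losses have plateaued.
    """
    def __init__(self, epochs_before_checking_stopping_conditions=10,
                 check_for_loss_plateau=True,
                 overfitting_limit=5, plateau_tolerance=1e-4):
        self.epochs_before_checking = epochs_before_checking_stopping_conditions
        self.check_for_loss_plateau = check_for_loss_plateau
        self.overfitting_limit = overfitting_limit    # assumed threshold
        self.plateau_tolerance = plateau_tolerance    # assumed threshold
        self.overfitting_counter = 0
        self.epochs_completed = 0

    def __call__(self, validation_losses):
        """Return True when one of the stopping conditions is satisfied."""
        self.epochs_completed += 1
        # Do not check anything during the first few epochs.
        if self.epochs_completed < self.epochs_before_checking:
            return False
        # Count consecutive epochs with rising validation loss.
        if len(validation_losses) >= 2 and validation_losses[-1] > validation_losses[-2]:
            self.overfitting_counter += 1
        else:
            self.overfitting_counter = 0
        if self.overfitting_counter >= self.overfitting_limit:
            return True
        # Plateau: the recent losses barely change.
        if self.check_for_loss_plateau and len(validation_losses) >= self.epochs_before_checking:
            recent = validation_losses[-self.epochs_before_checking:]
            if max(recent) - min(recent) < self.plateau_tolerance:
                return True
        return False
```

The real `__call__` also receives training losses and accuracies; this sketch keeps only the validation-loss part of the reasoning.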

Store accuracy and loss

class store_accuracy_and_loss.Store_Accuracy_and_Loss(network, name, scope=None)[source]

Store the loss, accuracy and individual accuracy values computed during training and validation

Parameters:

_path_to_accuracy_error_data :

Path to save the lists loss, accuracy, individual_accuracy

name : string

‘Training’ or ‘Validation’

loss : list

List with the values of the loss

accuracy : list

List with the values of the accuracy

individual_accuracy : list

List with the values of the individual accuracies

number_of_epochs_completed :

Number of epochs completed

scope : string

‘Training’ if the class is instantiated during the training of the accumulation. ‘Pretraining’ if the class is instantiated during pretraining

Methods

append_data(loss_value, accuracy_value, …) Appends the loss_value, accuracy_value and individual_accuracy_value
load() Load the values stored in case there are any saved
plot([axes_handles, index, color, …]) Plots the accuracy and the individual accuracies for every epoch
plot_global_fragments(ax_handles, video, …) Plots the global fragments used for training until the current epoch
save(number_of_epochs_completed) Saves the values stored
append_data(loss_value, accuracy_value, individual_accuracy_value)[source]

Appends the loss_value, accuracy_value and individual_accuracy_value to their corresponding lists

load()[source]

Load the values stored in case there are any saved

plot(axes_handles=None, index=0, color='r', canvas_from_GUI=None, legend_font_color=None)[source]

Plots the accuracy and the individual accuracies for every epoch

plot_global_fragments(ax_handles, video, fragments, black=False, canvas_from_GUI=None)[source]

Plots the global fragments used for training until the current epoch

save(number_of_epochs_completed)[source]

Saves the values stored
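The append/save/load cycle documented above can be illustrated with a hypothetical minimal analogue that persists the three lists as JSON; the real class stores additional state and also handles plotting:

```python
import json
import os

class AccuracyLossStoreSketch:
    """Hypothetical minimal analogue of Store_Accuracy_and_Loss."""
    def __init__(self, path):
        self.path = path  # where the lists are persisted
        self.loss = []
        self.accuracy = []
        self.individual_accuracy = []
        self.number_of_epochs_completed = 0

    def append_data(self, loss_value, accuracy_value, individual_accuracy_value):
        """Append one epoch's values to their corresponding lists."""
        self.loss.append(loss_value)
        self.accuracy.append(accuracy_value)
        self.individual_accuracy.append(individual_accuracy_value)

    def save(self, number_of_epochs_completed):
        """Persist the stored values to disk."""
        self.number_of_epochs_completed = number_of_epochs_completed
        with open(self.path, "w") as f:
            json.dump({"loss": self.loss,
                       "accuracy": self.accuracy,
                       "individual_accuracy": self.individual_accuracy,
                       "number_of_epochs_completed": number_of_epochs_completed}, f)

    def load(self):
        """Load previously saved values, if any exist."""
        if os.path.exists(self.path):
            with open(self.path) as f:
                data = json.load(f)
            self.loss = data["loss"]
            self.accuracy = data["accuracy"]
            self.individual_accuracy = data["individual_accuracy"]
            self.number_of_epochs_completed = data["number_of_epochs_completed"]
```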

Get data

class get_data.DataSet(number_of_animals=None, images=None, labels=None)[source]

Contains the images and labels to be used for training a particular model

Attributes

images (ndarray) Array of shape [num_images, height, width, channels] containing the images of the dataset
num_images (int) Number of images in the dataset
labels (ndarray) Array of shape [num_images, number_of_classes] containing the labels corresponding to the images in the dataset
number_of_animals (int) Number of classes in the dataset

Methods

consistency_check() Checks that the length of images and labels is the same
convert_labels_to_one_hot() Converts labels from dense format to one-hot format
consistency_check()[source]

Checks that the length of images and labels is the same

convert_labels_to_one_hot()[source]

Converts labels from dense format to one-hot format

See also

shuffle_images_and_labels()

get_data.dense_to_one_hot(labels, n_classes=2)[source]

Convert class labels from scalars to one-hot vectors.
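The conversion turns each scalar label into a vector with a single 1 at the label's index. A pure-Python sketch of this dense-to-one-hot mapping (the actual implementation presumably produces NumPy arrays):

```python
def dense_to_one_hot(labels, n_classes=2):
    """Convert integer class labels to one-hot row vectors.

    Pure-Python sketch: each scalar label k becomes a list of length
    n_classes with a 1 at index k and 0 elsewhere.
    """
    return [[1 if j == label else 0 for j in range(n_classes)]
            for label in labels]
```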

get_data.duplicate_PCA_images(training_images, training_labels)[source]

Creates a copy of every image in training_images by rotating it 180 degrees

Parameters:

training_images : ndarray

Array of shape [number of images, height, width, channels] containing the images to be rotated

training_labels : ndarray

Array of shape [number of images, 1] containing the labels corresponding to the training_images

Returns:

training_images : ndarray

Array of shape [2*number of images, height, width, channels] containing the original images and the images rotated

training_labels : ndarray

Array of shape [2*number of images, 1] containing the labels corresponding to the original images and the images rotated
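A 180-degree rotation amounts to reversing the row order and then reversing each row. A sketch on nested lists, ignoring the channel axis for brevity (the function names here are illustrative, not the documented API):

```python
def rotate_180(image):
    """Rotate a [height][width] image 180 degrees: reverse rows, then columns."""
    return [row[::-1] for row in image[::-1]]

def duplicate_images(images, labels):
    """Append a 180-degree-rotated copy of every image, duplicating labels.

    Returns lists twice as long as the inputs: originals followed by
    their rotated copies, with labels repeated to match.
    """
    rotated = [rotate_180(img) for img in images]
    return images + rotated, labels + labels
```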

get_data.shuffle_images_and_labels(images, labels)[source]

Shuffles images and labels with the same random permutation over the number of examples
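The key point is that images and labels must be shuffled with the same permutation so that each image keeps its label. A sketch; the `seed` parameter is an addition for reproducibility and is not part of the documented signature:

```python
import random

def shuffle_images_and_labels(images, labels, seed=None):
    """Shuffle images and labels with a single shared random permutation."""
    perm = list(range(len(images)))
    random.Random(seed).shuffle(perm)
    return [images[i] for i in perm], [labels[i] for i in perm]
```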

get_data.split_data_train_and_validation(number_of_animals, images, labels, validation_proportion=0.1)[source]

Splits a set of images and labels into training and validation sets

Parameters:

number_of_animals : int

Number of classes in the set of images

images : list

List of images (arrays of shape [height, width])

labels : list

List of integers from 0 to number_of_animals - 1

validation_proportion : float

The proportion of images that will be used to create the validation set.

Returns:

training_dataset : <DataSet object>

Object containing the images and labels for training

validation_dataset : <DataSet object>

Object containing the images and labels for validation
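A simplified sketch of such a split. Note the real function may balance the validation set per animal; this fraction-based version does not attempt that, and it returns plain tuples rather than DataSet objects:

```python
def split_train_validation(images, labels, validation_proportion=0.1):
    """Split (already shuffled) images/labels into training and validation parts.

    The first `validation_proportion` fraction of the examples becomes the
    validation set; the remainder is the training set.
    """
    n_validation = int(round(len(images) * validation_proportion))
    training = (images[n_validation:], labels[n_validation:])
    validation = (images[:n_validation], labels[:n_validation])
    return training, validation
```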

Epoch runner

Computes a given operation for an entire epoch divided into batches
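A generic sketch of such an epoch runner: slice the data into batches, apply the operation to each batch, and collect the per-batch results (in the real module the operation would be a TensorFlow train or validation step):

```python
def run_epoch(operation, images, labels, batch_size):
    """Apply `operation` to successive batches covering one full epoch.

    `operation` takes (batch_images, batch_labels) and returns a result
    (e.g. the batch loss); the per-batch results are collected in a list.
    """
    results = []
    for start in range(0, len(images), batch_size):
        batch_images = images[start:start + batch_size]
        batch_labels = labels[start:start + batch_size]
        results.append(operation(batch_images, batch_labels))
    return results
```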

Get predictions

Retrieves the predictions of the model idCNN (ConvNetwork) for a set of images