idtrackerai

Fragmentation

Fragment

A fragment is a collection of Blob objects (see Blob) that refer to the same individual. A Fragment object manages such a collection to facilitate the fingerprint operations and, later on, the identification of the animal represented in the fragment.

class fragment.Fragment(fragment_identifier=None, start_end=None, blob_hierarchy_in_starting_frame=None, images=None, bounding_box_in_frame_coordinates=None, centroids=None, areas=None, pixels=None, is_an_individual=None, is_a_crossing=None, number_of_animals=None, user_generated_identity=None)[source]

Collects the Blob objects (Blob) associated to the same individual or crossing.

Attributes

fragment_identifier (int) Unique identifier of the fragment assigned with compute_fragment_identifier_and_blob_index() and compute_crossing_fragment_identifier()
start_end (tuple) (starting_frame, ending_frame) of the fragment
blob_hierarchy_in_starting_frame (int) Hierarchy of the blob in the starting frame of the fragment (the hierarchy is computed bottom-top, left-right).
images (list) List of images associated to every blob represented in the fragment and ordered according to the frame they have been segmented from
bounding_box_in_frame_coordinates (tuple) List of bounding boxes (see bounding_box_in_frame_coordinates)
centroids (list) List of centroids (see centroid)
areas (list) List of areas (see area)
pixels (list) List of pixels (see pixels)
is_an_individual (bool) True if the fragment has been classified as representing an individual by detect_crossings()
is_a_crossing (bool) True if the fragment has been classified as representing a crossing by detect_crossings()
number_of_animals (int) Number of animals to be tracked
user_generated_identity (int) Identity generated by the user during validation (if it exists, else None)
is_in_a_global_fragment (bool) True if the fragment is contained in a global fragment (see GlobalFragment)
used_for_training (bool) True if the fragment has been used to train the idCNN in the fingerprinting protocol used to track the video
accumulated_globally (bool) True if the fragment’s images have been accumulated with global strategy as references during the fingerprinting protocol used to track the video
accumulated_partially (bool) True if the fragment’s images have been accumulated with partial strategy as references during the fingerprinting protocol used to track the video
accumulation_step (int) Accumulation step in which the fragment’s images have been accumulated
accumulable (bool) True if the fragment is eligible for accumulation i.e. it is contained in a global fragment object (see GlobalFragment)
used_for_pretraining (bool) True if the fragment has been used to pretrain the idCNN in the third protocol (see pre_train())
acceptable_for_training (bool) True if the fragment satisfies the conditions in check_if_is_acceptable_for_training()
frequencies (ndarray) array with shape [1, number_of_animals]. The ith component is the number of times the fragment has been assigned by the idCNN (argmax of the softmax predictions) to the identity i
P1_vector (ndarray) array with shape [1, number_of_animals] computed from frequencies by compute_identification_statistics()
P2_vector (ndarray) array with shape [1, number_of_animals] computed from the P1_vector and considering all the identified fragments overlapping (in frames) with self by compute_identification_statistics()
certainty (float) certainty of the identification of the fragment computed by compute_certainty_of_individual_fragment() (depends only on the predictions of the idCNN for the fragment’s images)
certainty_P2 (float) certainty of the identification of the fragment computed by compute_P2_vector()
is_certain (bool) True if certainty is greater than CERTAINTY_THRESHOLD
temporary_id (int) Identity assigned to the fragment during the fingerprint protocols cascade
identity (int) Identity assigned to the fragment
identity_is_fixed (bool) True if the certainty_P2 is greater than FIXED_IDENTITY_THRESHOLD
identity_corrected_closing_gaps (int) Identity assigned to the fragment while solving the crossings, if it exists; otherwise None
final_identity (int) Final identity of the fragment. It corresponds to user_generated_identity if it exists, otherwise to the identity assigned by the algorithm
assigned_identity (int) Identity assigned to the fragment by the algorithm
ambiguous_identities (list) List of possible identities in case the assignment is ambiguous (two or more identities can be assigned to the fragment with the same certainty)
potentially_randomly_assigned (bool) True if the certainty is below chance level, given the number of images in the fragment
non_consistent (bool) True if there exists a fragment overlapping (in frames) with self whose identity is the same as self’s
number_of_images (int) number of images composing the fragment
has_enough_accumulated_coexisting_fragments (bool) True if enough of the fragments coexisting with self have already been accumulated; the partial accumulation strategy requires this condition to hold
distance_travelled (float) distance travelled by the individual in the fragment
coexisting_individual_fragments (list) List of fragment objects that coexist (in frames) with self and that represent an individual
number_of_coexisting_individual_fragments (int) Number of individual fragments coexisting with self

Methods

are_overlapping(other) Checks whether self and other coexist in at least one frame.
assign_identity() Assigns the identity to self by considering the fragments coexisting with it.
check_consistency_with_coexistent_individual_fragments(…) Checks that the temporary identity assigned to the fragment is consistent with the identities of the coexisting fragments.
compute_P1_from_frequencies() Given the frequencies of an individual fragment, computes the P1 vector.
compute_P2_vector() Computes the P2_vector related to self.
compute_border_velocity(other) Velocity necessary to cover the space between two fragments.
compute_certainty_of_individual_fragment(…) Computes the certainty of the fragment from its P1_vector.
compute_identification_frequencies_individual_fragment(…) Counts the argmax of predictions per row.
compute_identification_statistics(…[, …]) Computes the statistics necessary for the identification of the fragment.
compute_median_softmax(number_of_animals) Computes the median of the softmaxed predictions per image.
frame_by_frame_velocity() Computes the frame-by-frame speed of the individual in the fragment.
get_attribute_of_coexisting_fragments(attribute) Retrieves a specific attribute from the collection of fragments coexisting with self.
get_coexisting_individual_fragments_indices(…) Gets the list of fragment objects representing an individual and coexisting with self.
get_fixed_identities_of_coexisting_fragments() Returns the identities of the fragments coexisting with self if they are fixed.
get_missing_identities_in_coexisting_fragments(…) Returns the identities that have not been assigned to the set of fragments coexisting with self.
get_neighbour_fragment(fragments, scope[, …]) Gets, if it exists, the fragment with the identity of self that immediately precedes or follows self.
get_possible_identities() Checks if P2 has two identical maxima.
recompute_P2_of_coexisting_fragments() Updates the P2 of the fragments coexisting with self.
reset([roll_back_to]) Resets attributes of self to a specific part of the algorithm.
set_P1_vector_accumulated() If self has been used for training, its P1_vector is set to a one-hot vector at the temporary_id position.
set_distance_travelled() Computes the distance travelled by the individual in the fragment.
set_partially_or_globally_accumulated(…) Sets accumulated_globally and accumulated_partially according to the accumulation strategy.
are_overlapping(other)[source]

Checks whether self and other coexist in at least one frame.

Parameters:

other : <Fragment object>

A second fragment

Returns:

bool

True if self and other coexist in at least one frame
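
The coexistence test can be pictured as a simple interval intersection on the fragments’ start_end tuples. This is a hypothetical sketch, not the library’s actual code; the half-open boundary convention is an assumption:

```python
def frames_overlap(start_end_a, start_end_b):
    # Two frame intervals coexist if neither ends before the other starts.
    # Assumes half-open (start, end) tuples, as stored in start_end.
    start_a, end_a = start_end_a
    start_b, end_b = start_end_b
    return start_a < end_b and start_b < end_a

overlapping = frames_overlap((0, 10), (5, 20))
disjoint = frames_overlap((0, 10), (12, 20))
```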

assign_identity()[source]

Assigns the identity to self by considering the fragments coexisting with it. If the certainty of the identification is high enough it sets the identity of self to be fixed

check_consistency_with_coexistent_individual_fragments(temporary_id)[source]

Check that the temporary identity assigned to the fragment is consistent with respect to the identities already assigned to the fragments coexisting (in frame) with it

Parameters:

temporary_id : int

Temporary identity assigned to the fragment

Returns:

bool

True if the identification of self with temporary_id does not cause any duplication
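
The duplication check amounts to a membership test against the identities of coexisting fragments. A minimal sketch under that assumption (the real method also handles accumulated and fixed fragments differently):

```python
def is_consistent(temporary_id, coexisting_identities):
    # The candidate identity is consistent only if no coexisting
    # fragment already carries it.
    return temporary_id not in coexisting_identities

ok = is_consistent(2, [1, 3, 4])
duplicated = is_consistent(3, [1, 3, 4])
```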

static compute_P1_from_frequencies()[source]

Given the frequencies of an individual fragment, computes the P1 vector. P1 is the softmax of the frequencies with base 2 for each identity.
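
The base-2 softmax described above can be sketched as follows, assuming frequencies is the [1, number_of_animals] count vector; the actual method may clip or renormalize to avoid numerical overflow:

```python
import numpy as np

def p1_from_frequencies(frequencies):
    # Base-2 softmax: P1_i = 2**f_i / sum_j 2**f_j
    powers = np.power(2.0, np.asarray(frequencies, dtype=float))
    return powers / powers.sum()

P1 = p1_from_frequencies([3, 0, 1])  # e.g. assigned 3x to id 1, 1x to id 3
```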

compute_P2_vector()[source]

Computes the P2_vector related to self. It is based on coexisting_individual_fragments

compute_border_velocity(other)[source]

Velocity necessary to cover the space between two fragments

Parameters:

other : <Fragment object>

A second fragment

Returns:

float

Returns the speed at which an individual should travel to be present in both self and other Fragment objects

static compute_certainty_of_individual_fragment(median_softmax)[source]

Computes the certainty given the P1_vector of the fragment by using the output of compute_median_softmax()

Parameters:

P1_vector : ndarray

array with shape [1, number_of_animals] computed from frequencies by compute_identification_statistics()

median_softmax : ndarray

Median of argmax(softmax_probs) per image

Returns:

float

Fragment’s certainty

static compute_identification_frequencies_individual_fragment(number_of_animals)[source]

Counts the argmax of predictions per row

Parameters:

predictions : ndarray

array of shape [n, 1]

number_of_animals : int

number of animals to be tracked

Returns:

ndarray

array of shape [1, number_of_animals], whose ith component counts how many predictions have maximum components at the index i
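
This counting step is essentially a histogram over predicted identities. A sketch using numpy’s bincount, assuming identities run from 1 to number_of_animals (a convention inferred from the rest of these docs, not confirmed by the source):

```python
import numpy as np

def identification_frequencies(predictions, number_of_animals):
    # Count how many per-image predictions land on each identity.
    # Identities assumed 1-based; index 0 of the bincount is discarded.
    counts = np.bincount(np.asarray(predictions).ravel(),
                         minlength=number_of_animals + 1)
    return counts[1:].reshape(1, number_of_animals)

freqs = identification_frequencies([[1], [1], [3], [1]], number_of_animals=3)
```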

compute_identification_statistics(predictions, softmax_probs, number_of_animals=None)[source]

Computes the statistics necessary for the identification of the fragment

Parameters:

predictions : ndarray

array of shape [number_of_images_in_fragment, 1] whose components are the argmax(softmax_probs) per image

softmax_probs : ndarray

array of shape [number_of_images_in_fragment, number_of_animals] whose rows are the result of applying the softmax function to the predictions outputted by the idCNN per image

number_of_animals : int

Number of animals to be tracked

static compute_median_softmax(number_of_animals)[source]

Given the softmax of the predictions outputted by the network, it computes their median according to the argmax of the softmaxed predictions per image

Parameters:

softmax_probs : ndarray

array of shape [number_of_images_in_fragment, number_of_animals] whose rows are the result of applying the softmax function to the predictions outputted by the idCNN per image

number_of_animals : int

number of animals to be tracked

Returns:

ndarray

Median of argmax(softmax_probs) per image
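
One plausible reading of this computation (the exact grouping and output shape are assumptions, not confirmed by the source): for each identity, take the images whose argmax falls on that identity and compute the median of their softmax rows.

```python
import numpy as np

def median_softmax(softmax_probs, number_of_animals):
    # For each identity i, median of the softmax rows of the images
    # whose argmax is i. A sketch only; the library may differ.
    softmax_probs = np.asarray(softmax_probs, dtype=float)
    assigned = softmax_probs.argmax(axis=1)
    medians = np.zeros((number_of_animals, number_of_animals))
    for i in range(number_of_animals):
        rows = softmax_probs[assigned == i]
        if rows.size:
            medians[i] = np.median(rows, axis=0)
    return medians

probs = [[0.8, 0.2], [0.6, 0.4], [0.3, 0.7]]
med = median_softmax(probs, number_of_animals=2)
```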

frame_by_frame_velocity()[source]

Computes the frame-by-frame speed of the individual in the fragment.

Returns:

ndarray

Frame by frame speed of the individual in the fragment

get_attribute_of_coexisting_fragments(attribute)[source]

Retrieves a specific attribute from the collection of fragments coexisting (in frame) with self

Parameters:

attribute : str

attribute to retrieve

Returns:

list

Values of the attribute specified by attribute for each fragment coexisting with self

get_coexisting_individual_fragments_indices(fragments)[source]

Get the list of fragment objects representing an individual (i.e. not representing a crossing where two or more animals are touching) and coexisting (in frame) with self

Parameters:

fragments : list

List of all the fragments in the video

get_fixed_identities_of_coexisting_fragments()[source]

Considers the fragments coexisting with self and returns their identities if they are fixed (see identity_is_fixed)

Returns:

list

List of fixed identities of the fragments coexisting with self

get_missing_identities_in_coexisting_fragments(fixed_identities)[source]

Returns the identities that have not been assigned to the set of fragments coexisting with self

Parameters:

fixed_identities : list

List of fixed identities

Returns:

list

List of missing identities in coexisting fragments
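
This is a set difference over the 1..number_of_animals identity range. A minimal sketch (function name and signature are illustrative, not the library’s):

```python
def missing_identities(assigned_identities, number_of_animals):
    # Identities run from 1 to number_of_animals; those not yet
    # assigned to any coexisting fragment are "missing".
    return sorted(set(range(1, number_of_animals + 1)) - set(assigned_identities))

missing = missing_identities([1, 3], number_of_animals=4)
```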

get_neighbour_fragment(fragments, scope, number_of_frames_in_direction=0)[source]

If it exists, gets the fragment in the list of all fragments whose identity is the identity assigned to self and whose starting frame is the ending frame of self + 1 (scope “to_the_future”), or whose ending frame is the starting frame of self - 1 (scope “to_the_past”)

Parameters:

fragments : list

List of all the fragments in the video

scope : str

If “to_the_future”, looks for the fragment consecutive to self; if “to_the_past”, looks for the fragment that precedes self

number_of_frames_in_direction : int

Distance (in frames) at which the previous or next fragment has to be

Returns:

<Fragment object>

The neighbouring fragment with respect to self in the direction specified by scope if it exists. Otherwise None

static get_possible_identities()[source]

Checks if P2 has two identical maxima. In that case returns the indices; otherwise returns False

recompute_P2_of_coexisting_fragments()[source]

Updates the P2 of the fragments coexisting with self (see coexisting_individual_fragments) if their identity is not fixed (see identity_is_fixed)

reset(roll_back_to=None)[source]

Reset attributes of self to a specific part of the algorithm

Parameters:

roll_back_to : str

Resets all the attributes up to the process specified: ‘fragmentation’, ‘pretraining’, ‘accumulation’ or ‘assignment’

set_P1_vector_accumulated()[source]

If self has been used for training its P1_vector is modified to be a vector of zeros with a single component set to 1 in the temporary_id position

set_distance_travelled()[source]

Computes the distance travelled by the individual in the fragment. It is based on the position of the centroids in consecutive images. See centroid
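
The distance travelled is the sum of Euclidean distances between centroids in consecutive frames, as described above. A short sketch under that description:

```python
import numpy as np

def distance_travelled(centroids):
    # Sum of Euclidean step lengths between consecutive centroids.
    steps = np.diff(np.asarray(centroids, dtype=float), axis=0)
    return float(np.linalg.norm(steps, axis=1).sum())

d = distance_travelled([(0, 0), (3, 4), (3, 4)])
```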

set_partially_or_globally_accumulated(accumulation_strategy)[source]

Sets accumulated_globally and accumulated_partially according to accumulation_strategy

Parameters:

accumulation_strategy : str

Can be “global” or “partial”

List of fragments

Collection of instances of the class Fragment

class list_of_fragments.ListOfFragments(fragments)[source]

Collects all the instances of the class Fragment generated from the blobs extracted from the video during segmentation (see segmentation) after having assigned to each Blob instance a fragment identifier by using the method compute_fragment_identifier_and_blob_index()

Attributes

fragments (list) list of instances of the class Fragment
number_of_fragments (int) number of fragments computed by the method compute_fragment_identifier_and_blob_index()

Methods

compute_P2_vectors() Computes the P2_vector associated to every individual fragment.
compute_number_of_unique_images_used_for_pretraining() Returns the number of images used for pretraining (without repetitions).
compute_number_of_unique_images_used_for_training() Returns the number of images used for training (without repetitions).
compute_ratio_of_images_used_for_pretraining() Returns the ratio of images used for pretraining over the number of available images.
compute_ratio_of_images_used_for_training() Returns the ratio of images used for training over the number of available images.
compute_total_number_of_images_in_global_fragments() Sets the number of images available in global fragments (without repetitions).
create_light_list([attributes]) Creates a light version of an instance of ListOfFragments by storing only the listed attributes.
get_accumulable_individual_fragments_identifiers(…) Gets the unique identifiers of the individual fragments that can be accumulated.
get_data_plot() Gathers the data to plot the individual fragments’ statistics of the video.
get_fragment_identifier_to_index_list() Creates a mapping between the attribute fragments and their identifiers.
get_images_from_fragments_to_assign() Concatenates the images of the individual fragments not used to train the idCNN, to get predictions for them.
get_new_images_and_labels_for_training() Extracts images and creates labels from every individual fragment not yet used for training.
get_next_fragment_to_identify() Returns the next fragment to be identified after the fingerprint protocols cascade by sorting according to the certainty computed with P2.
get_not_accumulable_individual_fragments_identifiers(…) Gets the unique identifiers of the individual fragments that cannot be accumulated.
get_number_of_unidentified_individual_fragments() Returns the number of individual fragments that have not been identified.
get_ordered_list_of_fragments([scope, …]) Sorts the fragments starting from the frame number first_frame_first_global_fragment.
get_stats(list_of_global_fragments) Collects statistics from both fragments and global fragments.
load(path_to_load) Loads a previously saved instance of ListOfFragments from the path path_to_load.
load_light_list(accumulation_folder) Loads a list of dictionaries created with the method create_light_list().
plot_stats(video) Plots the statistics obtained through get_stats().
reset([roll_back_to]) Resets all the fragments by using the method reset() of the class Fragment.
save(fragments_path) Saves an instance of ListOfFragments in the path specified by fragments_path.
save_light_list(accumulation_folder) Saves a list of dictionaries created with the method create_light_list().
set_fragments_as_accumulable_or_not_accumulable() Sets the attribute accumulable of each fragment.
update_fragments_dictionary(list_of_dictionaries) Updates fragment objects (see Fragment) by considering a list of dictionaries.
update_from_list_of_blobs(blobs_in_video, …) Updates an instance of ListOfFragments by considering an instance of ListOfBlobs.
compute_P2_vectors()[source]

Computes the P2_vector associated to every individual fragment. See compute_P2_vector()

compute_number_of_unique_images_used_for_pretraining()[source]

Returns the number of images used for pretraining (without repetitions)

Returns:

int

Number of images used in pretraining

compute_number_of_unique_images_used_for_training()[source]

Returns the number of images used for training (without repetitions)

Returns:

int

Number of images used in training

compute_ratio_of_images_used_for_pretraining()[source]

Returns the ratio of images used for pretraining over the number of available images

Returns:

float

Ratio of images used for pretraining

compute_ratio_of_images_used_for_training()[source]

Returns the ratio of images used for training over the number of available images

Returns:

float

Ratio of images used for training

compute_total_number_of_images_in_global_fragments()[source]

Sets the number of images available in global fragments (without repetitions)

create_light_list(attributes=None)[source]

Creates a light version of an instance of ListOfFragments by storing only the attributes listed in attributes in a list of dictionaries

Parameters:

attributes : list

list of attributes to be stored

Returns:

list

list of dictionaries organised per fragment, whose keys are the attributes listed in attributes

get_accumulable_individual_fragments_identifiers(list_of_global_fragments)[source]

Gets the unique identifiers associated to individual fragments that can be accumulated

Parameters:

list_of_global_fragments : <ListOfGlobalFragments object>

Object collecting the global fragment objects (instances of the class GlobalFragment) detected in the entire video

get_data_plot()[source]

Gathers the data to plot the individual fragments’ statistics of the video.

Returns:

ndarray

array of shape [number_of_individual_fragments, 1]. Number of images in individual fragments

ndarray

array of shape [number_of_individual_fragments, 1]. Distance travelled in every individual fragment

int

Number of images in crossing fragments

get_fragment_identifier_to_index_list()[source]

Creates a mapping between the attribute fragments and their identifiers built from the ListOfBlobs

Returns:

list

Mapping from the collection of fragments to the list of fragment identifiers

get_images_from_fragments_to_assign()[source]

Takes all the fragments that have not been used to train the idCNN and that are associated with an individual, and concatenates their images in order to feed them to the idCNN and get predictions

Returns:

ndarray

[number_of_images, height, width, number_of_channels]

get_new_images_and_labels_for_training()[source]

Extracts images and creates labels from every individual fragment that has not been used to train the network during the fingerprint protocols cascade

Returns:

list

images

list

labels

get_next_fragment_to_identify()[source]

Returns the next fragment to be identified after the fingerprint protocols cascade by sorting according to the certainty computed with P2. See certainty_P2

Returns:

<Fragment object>

an instance of the class Fragment

get_not_accumulable_individual_fragments_identifiers(list_of_global_fragments)[source]

Gets the unique identifiers associated to individual fragments that cannot be accumulated

Parameters:

list_of_global_fragments : <ListOfGlobalFragments object>

Object collecting the global fragment objects (instances of the class GlobalFragment) detected in the entire video

get_number_of_unidentified_individual_fragments()[source]

Returns the number of individual fragments that have not been identified during the fingerprint protocols cascade

Returns:

int

number of non-identified individual fragments

get_ordered_list_of_fragments(scope=None, first_frame_first_global_fragment=None)[source]

Sorts the fragments starting from the frame number first_frame_first_global_fragment. According to scope, the sorting is done either “to the future” or “to the past” with respect to first_frame_first_global_fragment

Parameters:

scope : str

either “to_the_past” or “to_the_future”

first_frame_first_global_fragment : int

frame number corresponding to the first frame in which all the individual fragments coexist in the first global fragment used in an iteration of the fingerprint protocol cascade

Returns:

list

list of sorted fragments

get_stats(list_of_global_fragments)[source]

Collects the following statistics from both fragments and global fragments:

* number_of_fragments
* number_of_crossing_fragments
* number_of_individual_fragments
* number_of_individual_fragments_not_in_a_global_fragment
* number_of_accumulable_individual_fragments
* number_of_not_accumulable_individual_fragments
* number_of_accumualted_individual_fragments
* number_of_globally_accumulated_individual_fragments
* number_of_partially_accumulated_individual_fragments
* number_of_blobs
* number_of_crossing_blobs
* number_of_individual_blobs
* number_of_individual_blobs_not_in_a_global_fragment
* number_of_accumulable_individual_blobs
* number_of_not_accumulable_individual_blobs
* number_of_accumualted_individual_blobs
* number_of_globally_accumulated_individual_blobs
* number_of_partially_accumulated_individual_blobs

Parameters:

list_of_global_fragments : <ListOfGlobalFragments object>

Object collecting the global fragment objects (instances of the class GlobalFragment) detected in the entire video

Returns:

dict

dictionary of statistics

classmethod load(path_to_load)[source]

Loads a previously saved instance of ListOfFragments (see save()) from the path path_to_load

load_light_list(accumulation_folder)[source]

Loads a list of dictionaries created with the method create_light_list() and saved with save_light_list() from the folder accumulation_folder

plot_stats(video)[source]

Plots the statistics obtained through get_stats()

Parameters:

video : <Video object>

See Video

reset(roll_back_to=None)[source]

Resets all the fragments by calling the method reset() of the class Fragment with the parameter roll_back_to

save(fragments_path)[source]

Saves an instance of ListOfFragments in the path specified by fragments_path

save_light_list(accumulation_folder)[source]

Saves a list of dictionaries created with the method create_light_list() in the folder accumulation_folder

set_fragments_as_accumulable_or_not_accumulable()[source]

Sets the attribute accumulable of each fragment

update_fragments_dictionary(list_of_dictionaries)[source]

Updates fragment objects (see Fragment) by considering a list of dictionaries

update_from_list_of_blobs(blobs_in_video, fragment_identifier_to_index)[source]

Updates an instance of ListOfFragments by considering an instance of ListOfBlobs (see ListOfBlobs)

Parameters:

blobs_in_video : list

list of the blob objects (see class Blob) generated from the blobs segmented in the video

fragment_identifier_to_index : list

Mapping from the collection of fragments to the list of fragment identifiers

list_of_fragments.create_list_of_fragments(blobs_in_video, number_of_animals)[source]

Generate a list of instances of Fragment collecting all the fragments in the video.

Parameters:

blobs_in_video : list

list of the blob objects (see class Blob) generated from the blobs segmented in the video

number_of_animals : int

Number of animals to track

Returns:

list

list of instances of Fragment

Global fragment

Global fragments are the core of the tracking algorithm: they are collections of instances of the class Fragment containing images extracted from a part of the video in which all the animals are visible.

class globalfragment.GlobalFragment(blobs_in_video, fragments, index_beginning_of_fragment, number_of_animals)[source]

A global fragment is a collection of instances of the class Fragment. Such fragments are collected from a part of the video in which all animals are visible.

Attributes

index_beginning_of_fragment (int) minimum frame number in which all the individual fragments (see Fragment) coexist
individual_fragments_identifiers (list) list of the fragment identifiers associated to the individual fragments composing the global fragment (see Fragment)
number_of_animals (int) number of animals to be tracked
_is_unique (bool) True if each of the individual fragments has been assigned a unique identity
_is_certain (bool) True if each of the individual fragments has scored a certainty above the threshold CERTAINTY_THRESHOLD
_ids_assigned (ndarray) shape [1, number_of_animals]; each component corresponds to the identity assigned by the algorithm to one of the individual fragments
_temporary_ids (ndarray) shape [1, number_of_animals] temporary ids assigned during the fingerprinting protocol to each of the fragments composing the global fragment
_repeated_ids (list) list of identities repeated during the identification of the individual fragments in the global fragment
_missing_ids (list) list of identities not assigned in the global fragment (since in a global fragment all animals are visible, all the identities should be assigned)
predictions (list) list of predictions for every individual fragment
softmax_probs_median (list) list of softmax_probs_median for every individual fragment

Methods

acceptable_for_training(accumulation_strategy) Returns True if the global fragment is acceptable for training.
check_uniqueness(scope) Checks that the identities assigned to the individual fragments are unique.
compute_start_end_frame_indices_of_individual_fragments(…) Computes the start and end frame indices of the individual fragments.
get_images_and_labels_for_pretraining() Arranges the images and identities in the global fragment as a labelled dataset.
get_individual_fragments_of_global_fragment(…) Gets the individual fragments in the global fragment by using their unique identifiers.
get_list_of_attributes_from_individual_fragments(…) Copies a set of attributes from the individual fragments to the global fragment.
get_total_number_of_images() Gets the total number of images in the global fragment.
reset(roll_back_to) Resets attributes to the fragmentation step in the algorithm.
set_candidate_for_accumulation() Sets the global fragment to be eligible for accumulation.
set_minimum_distance_travelled() Sets the minimum distance travelled attribute.
update_individual_fragments_attribute(…) Updates an attribute in every individual fragment in the global fragment.
acceptable_for_training(accumulation_strategy)[source]

Returns True if the global fragment is acceptable for training. See acceptable_for_training for every individual fragment in the global fragment

Parameters:

accumulation_strategy : str

can be either “global” or “partial”

Returns:

bool

True if the global fragment is acceptable for training

check_uniqueness(scope)[source]

Checks that the identities assigned to the individual fragments are unique

Parameters:

scope : str

Either “global” or “partial”

compute_start_end_frame_indices_of_individual_fragments(blobs_in_video)[source]

Computes the start and end frame indices of the individual fragments composing the global fragment.

Parameters:

blobs_in_video : list

list of the blob objects (see Blob) segmented from the video

get_images_and_labels_for_pretraining()[source]

Arrange the images and identities in the global fragment as a labelled dataset in order to train the idCNN

get_individual_fragments_of_global_fragment(fragments)[source]

Get the individual fragments in the global fragment by using their unique identifiers

Parameters:

fragments : list

all the fragments extracted from the video (see Fragment)

get_list_of_attributes_from_individual_fragments(fragments, list_of_attributes=['distance_travelled', 'number_of_images'])[source]

Given a set of attributes available in the instances of the class Fragment, copies them into the global fragment. For instance, the attribute number_of_images belonging to the individual fragments in the global fragment will be set as global_fragment.number_of_images_per_individual_fragment, where each element of the list corresponds to the number_of_images of each individual fragment, preserving the order in which the global fragment has been initialised

Parameters:

fragments : <Fragment object>

list_of_attributes : list

List of attributes to be transferred from the individual fragments to the global fragment

get_total_number_of_images()[source]

Gets the total number of images in the global fragment

reset(roll_back_to)[source]

Resets attributes to the fragmentation step in the algorithm, allowing for example to start a new accumulation

Parameters:

roll_back_to : str

“fragmentation”

set_candidate_for_accumulation()[source]

Sets the global fragment to be eligible for accumulation

set_minimum_distance_travelled()[source]

Sets the minimum distance travelled attribute

update_individual_fragments_attribute(attribute, value)[source]

Update attribute in every individual fragment in the global fragment by setting it at value

Parameters:

attribute : str

attribute to be updated

value : list, int, float

value of attribute

List of global fragments

Collection of instances of the class GlobalFragment. Global fragments are used to create the dataset of labelled images used to train the idCNN during the fingerprint protocols cascade.