idtrackerai

Preprocessing

Blob

class blob.Blob(centroid, contour, area, bounding_box_in_frame_coordinates, bounding_box_image=None, estimated_body_length=None, pixels=None, number_of_animals=None, frame_number=None)[source]

Object representing a blob (collection of pixels) segmented from a frame

Attributes

frame_number (int) Number of the frame from which the blob was segmented
number_of_animals (int) Number of animals to be tracked
centroid (tuple) Centroid as (x, y) in frame coordinates
contour: list List of tuples (x, y) in frame coordinates of the pixels on the boundary of the blob
area (int) Number of pixels composing the blob
bounding_box_in_frame_coordinates: list List of tuples [(x, y), (x + width, y + height)] of the bounding box rectangle enclosing the blob’s pixels
bounding_box_image: ndarray Image obtained by clipping the frame to the bounding box
estimated_body_length: float Body length estimated from the bounding box
_image_for_identification: ndarray Image fed to the network to get the identity of the animal the blob is associated with
pixels (list) List of ravelled pixels belonging to the blob (with respect to the full frame)
_is_an_individual (bool) If True the blob is associated with a single animal
_is_a_crossing (bool) If True the blob is associated with two or more animals touching
_was_a_crossing (bool) If True the blob has been generated by splitting a crossing in postprocessing
_is_a_misclassified_individual (bool) This property can be modified only by the user during validation. It identifies a blob that was mistakenly classified as a crossing by the DeepCrossingDetector
next (list) List of blob objects segmented in self.frame_number + 1 whose list of pixels intersect self.pixels
previous (list) List of blob objects segmented in self.frame_number - 1 whose list of pixels intersect self.pixels
_fragment_identifier (int) Unique integer identifying the fragment (built by blob overlapping) to which self belongs
_blob_index (int) Hierarchy of the blob in the first frame of the fragment it belongs to (see compute_fragment_identifier_and_blob_index())
_used_for_training (bool) If True the image obtained from the blob has been used to train the idCNN
_accumulation_step (int) Accumulation step in which the image associated with the blob has been accumulated
_generated_while_closing_the_gap (bool) If True the blob has been generated while solving the crossings
_user_generated_identity (int) The identity corrected manually by the user during validation
_identity_corrected_closing_gaps (int) The identity given to the blob during postprocessing
_identity_corrected_solving_duplication (int) The identity given to the blob while solving duplications
_identity (int) Identity associated to the blob
is_identified (bool) True if self.identity is not None
final_identity (int) Identity assigned to self after validation
assigned_identity (int) Identity assigned to self by the algorithm (ignoring any correction made by the user during validation)
has_ambiguous_identity: bool True if during either accumulation or residual identification the blob has been associated with equal probability to two (or more) distinct identities
nose_coordinates (tuple) Coordinates of the nose of the blob (only for zebrafish)
head_coordinates (tuple) Coordinates of the centroid of the head of the blob (only for zebrafish)
extreme1_coordinate (tuple)
extreme2_coordinates (tuple)
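
A minimal sketch of how a Blob might be instantiated from an OpenCV contour. The frame path, the thresholding step and the value of number_of_animals are illustrative assumptions, not part of the library (OpenCV 4 return convention for findContours):

    import cv2
    import numpy as np
    from blob import Blob

    frame = cv2.imread("frame_0000.png", cv2.IMREAD_GRAYSCALE)  # assumed input
    _, binary = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

    cnt = contours[0]
    x, y, w, h = cv2.boundingRect(cnt)
    m = cv2.moments(cnt)
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])  # centre of mass

    blob = Blob(
        centroid,
        cnt,
        cv2.contourArea(cnt),
        [(x, y), (x + w, y + h)],
        bounding_box_image=frame[y:y + h, x:x + w],
        estimated_body_length=np.sqrt(w ** 2 + h ** 2),  # bounding box diagonal
        number_of_animals=8,  # assumed number of animals to track
        frame_number=0,
    )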

Methods

apply_model_area(video, number_of_animals, …) Classify self as a crossing or individual blob according to its area
check_for_multiple_next_or_previous([direction]) Return True if self has multiple blobs in its past or future overlapping history
compute_overlapping_with_previous_blob()
distance_from_countour_to(point) Returns the distance between the point passed as input and the closest point belonging to the contour of the blob.
get_image_for_identification(video[, …]) Compute the image that will be used to identify the animal with the idCNN
get_nose_and_head_coordinates() Only for zebrafish: Compute the nose coordinate according to [R1]
in_a_global_fragment_core(blobs_in_frame) A blob in a frame is in the core of a global fragment if in that frame there are as many blobs as animals to track
is_a_sure_crossing() A blob marked as a sure crossing will be used to train the Deep Crossing Detector (the artificial neural network used to discriminate images of individuals from images of crossings).
is_a_sure_individual() A blob marked as a sure individual will be used to train the Deep Crossing Detector (the artificial neural network used to discriminate images of individuals from images of crossings).
now_points_to(other) Given two consecutive blob objects updates their respective overlapping histories
overlaps_with(other) Given a second blob object, checks if the lists of pixels of the two blobs intersect
set_image_for_identification(video) Set the image that will be used to identify the animal with the idCNN
squared_distance_to(other) Returns the squared distance from the centroid of self to the centroid of other
apply_model_area(video, number_of_animals, model_area, identification_image_size, number_of_blobs)[source]

Classify self as a crossing or individual blob according to its area

Parameters:

video : <Video object>

Object containing all the parameters of the video.

number_of_animals : int

Number of animals to be tracked

model_area : function

Model of the area blobs representing individual animals

identification_image_size : tuple

Shape of the images used for the identification

number_of_blobs : int

Number of blobs segmented in the frame self.frame_number

check_for_multiple_next_or_previous(direction=None)[source]

Return True if self has multiple blobs in its past or future overlapping history

Parameters:

direction : str

“previous” or “next”. If “previous”, the past overlapping history is checked to find out whether the blob splits in the past. Symmetrically, if “next”, the future overlapping history of the blob is checked

Returns:

Bool

If True the blob splits into two or more overlapping blobs in its “past” or “future” history, depending on the parameter “direction”
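
An illustrative reimplementation of the history walk using only the next and previous attributes documented above (a sketch, not the library code):

    def splits_in_direction(blob, direction="next"):
        # Follow the overlapping history in one direction; a neighbour count
        # different from 1 means the history either ends (0) or splits (> 1).
        current = blob
        while True:
            neighbours = getattr(current, direction)  # blob.next or blob.previous
            if len(neighbours) != 1:
                return len(neighbours) > 1
            current = neighbours[0]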

distance_from_countour_to(point)[source]

Returns the distance between the point passed as input and the closest point belonging to the contour of the blob.

Parameters:

point : tuple

(x,y)

Returns:

float

\(\min_{c \in \text{blob.contour}} d(c, \text{point})\), where \(d\) is the Euclidean distance.
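
The same computation written out with NumPy (an illustrative sketch of the method documented above, which operates on self.contour):

    import numpy as np

    def distance_from_contour_to(contour, point):
        # contour: list of (x, y) tuples; point: (x, y)
        contour = np.asarray(contour, dtype=float).reshape(-1, 2)
        point = np.asarray(point, dtype=float)
        return float(np.min(np.linalg.norm(contour - point, axis=1)))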

get_image_for_identification(video, folder_to_save_for_paper_figure='', image_size=None)[source]

Compute the image that will be used to identify the animal with the idCNN

Parameters:

video : <Video object>

Object containing all the parameters of the video.

folder_to_save_for_paper_figure : str

Path to the folder in which the image will be saved. If ‘’ the image will not be saved.

image_size : tuple

Shape of the image

Returns:

ndarray

Standardised image

get_nose_and_head_coordinates()[source]

Only for zebrafish: Compute the nose coordinate according to [R1]

[R1] Wang, Shuo Hong, et al. “Automated planar tracking the waving bodies of multiple zebrafish swimming in shallow water.” PLoS ONE 11.4 (2016): e0154714.

in_a_global_fragment_core(blobs_in_frame)[source]

A blob in a frame is in the core of a global fragment if in that frame there are as many blobs as animals to track

Parameters:

blobs_in_frame : list

List of Blob objects representing the animals segmented in the frame self.frame_number

Returns:

Bool

True if the blob is in the core of a global fragment
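
The test reduces to a count, sketched below (in the library the number of animals comes from the blob's number_of_animals attribute; here it is passed explicitly):

    def in_a_global_fragment_core(blobs_in_frame, number_of_animals):
        # A frame is in the core of a global fragment when every animal
        # is segmented as a separate blob.
        return len(blobs_in_frame) == number_of_animals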

is_a_sure_crossing()[source]

A blob marked as a sure crossing will be used to train the Deep Crossing Detector (the artificial neural network used to discriminate images of individuals from images of crossings).

Returns:

Bool

Blob is a sure crossing if:

  • it overlaps with one and only one blob in both the immediate past and future frames;
  • it splits in both its past and future overlapping history

is_a_sure_individual()[source]

A blob marked as a sure individual will be used to train the Deep Crossing Detector (the artificial neural network used to discriminate images of individuals from images of crossings).

Returns:

Bool

Blob is a sure individual if:

  • it overlaps with one and only one blob in both the immediate past and future frames;
  • it never splits in either its past or future overlapping history
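
Combining the two criteria with the attributes and methods documented earlier gives roughly the following picture (an illustrative sketch, not the library implementation):

    def is_a_sure_individual(blob):
        # One and only one overlapping blob in the immediate past and future
        overlaps_once = len(blob.previous) == 1 and len(blob.next) == 1
        # No split anywhere in the past or future overlapping history
        never_splits = (not blob.check_for_multiple_next_or_previous("previous")
                        and not blob.check_for_multiple_next_or_previous("next"))
        return overlaps_once and never_splits
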
now_points_to(other)[source]

Given two consecutive blob objects updates their respective overlapping histories

Parameters:

other : <Blob object>

An instance of the class Blob

overlaps_with(other)[source]

Given a second blob object, checks if the lists of pixels of the two blobs intersect

Parameters:

other : <Blob object>

An instance of the class Blob

Returns:

Bool

True if the lists of pixels have non-empty intersection
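
Since pixels holds ravelled full-frame indices, the overlap test is a set intersection; a sketch:

    def overlaps_with(self, other):
        # Non-empty intersection of the two ravelled-pixel lists
        return bool(set(self.pixels) & set(other.pixels))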

set_image_for_identification(video)[source]

Set the image that will be used to identify the animal with the idCNN

Parameters:

video : <Video object>

Object containing all the parameters of the video.

squared_distance_to(other)[source]

Returns the squared distance from the centroid of self to the centroid of other

Parameters:

other : <Blob object> or tuple

An instance of the class Blob or a tuple (x,y)

Returns:

float

Squared distance between centroids
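
An illustrative version accepting either a Blob or an (x, y) tuple, as documented above:

    import numpy as np

    def squared_distance_to(self, other):
        other_centroid = other.centroid if hasattr(other, "centroid") else other
        diff = np.asarray(self.centroid, dtype=float) - np.asarray(other_centroid, dtype=float)
        return float(np.sum(diff ** 2))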

blob.full2miniframe(point, boundingBox)[source]

Maps a point in the full frame to the coordinate system defined by the image generated by considering the bounding box of the blob. Here it is used for centroids.

Parameters:

point : tuple

(x, y)

boundingBox : list

[(x, y), (x + bounding_box_width, y + bounding_box_height)]

Returns:

tuple

\((x^\prime, y^\prime)\)
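
The mapping is a translation by the top-left corner of the bounding box, sketched below:

    def full2miniframe(point, bounding_box):
        # bounding_box: [(x, y), (x + width, y + height)]
        (x0, y0), _ = bounding_box
        return (point[0] - x0, point[1] - y0)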

blob.remove_background_pixels(height, width, bounding_box_image, pixels, bounding_box_in_frame_coordinates, folder_to_save_for_paper_figure)[source]

Removes the background pixels, substituting them with a homogeneous black background.

Parameters:

height : int

Frame height

width : int

Frame width

bounding_box_image : ndarray

Images cropped from the frame by considering the bounding box associated to a blob

pixels : list

List of pixels associated to a blob

bounding_box_in_frame_coordinates : list

[(x, y), (x + bounding_box_width, y + bounding_box_height)]

folder_to_save_for_paper_figure : str

folder to save the images for identification

Returns:

ndarray

Image with black background pixels
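
A sketch of the masking step with NumPy: build a full-frame boolean mask from the ravelled pixels, crop it to the bounding box, and paint everything outside the blob black (the saving logic is omitted):

    import numpy as np

    def remove_background_pixels(height, width, bounding_box_image, pixels,
                                 bounding_box_in_frame_coordinates):
        mask = np.zeros((height, width), dtype=bool)
        mask.ravel()[pixels] = True  # pixels are ravelled full-frame indices
        (x0, y0), (x1, y1) = bounding_box_in_frame_coordinates
        mask_in_box = mask[y0:y1, x0:x1]
        return np.where(mask_in_box, bounding_box_image, 0)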

List of Blobs

Collection of instances of the class Blob generated by considering all the blobs segmented from the video.

class list_of_blobs.ListOfBlobs(blobs_in_video=None, number_of_frames=None)[source]

Collects all the instances of the class Blob generated from the blobs extracted from the video during segmentation (see segmentation)

Attributes

blobs_in_video (list) List of instances of Blob segmented from the video and organised framewise
number_of_frames (int) number of frames in the video
blobs_are_connected (bool) True if the blobs have already been organised in fragments (see Fragment). False otherwise

Methods

apply_model_area_to_video(video, model_area, …) Applies model_area to every blob extracted from video
check_maximal_number_of_blob(number_of_animals) Checks that the number of blobs per frame is not greater than the number of animals to track
compute_crossing_fragment_identifier() Assign a unique identifier to fragments associated to a crossing
compute_fragment_identifier_and_blob_index(…) Associates a unique fragment identifier to the fragments computed by overlapping (see compute_overlapping_between_subsequent_frames()) and sets the blob index as the hierarchy of the blob in the first frame of the fragment to which the blob belongs.
compute_model_area_and_body_length(…) Computes the median and standard deviation of the areas of all the blobs of the video and the median_body_length estimated from the diagonal of the bounding box.
compute_nose_and_head_coordinates() Computes nose and head coordinate for all the blobs segmented from the video
compute_overlapping_between_subsequent_frames() Computes overlapping between self and the blobs generated during segmentation in the next frames
connect() Connects blobs in subsequent frames by computing their overlapping
disconnect() Reinitialise the previous and next list of blobs (see next and previous).
erode(video) Erodes all the blobs in the video
get_data_plot() Gets the areas of all the blobs segmented in the video
load(video, path_to_load_blob_list_file) Loads an instance of ListOfBlobs from path_to_load_blob_list_file.
reconnect() Connects blobs in subsequent frames by computing their overlapping and sets blobs_are_connected to True
save(video[, path_to_save, number_of_chunks]) Saves the instance
update_from_list_of_fragments(fragments, …) Updates the blobs objects generated from the video with the attributes computed for each fragment
apply_model_area_to_video(video, model_area, identification_image_size, number_of_animals)[source]

Applies model_area to every blob extracted from video

Parameters:

video : <Video object>

See Video

model_area : <ModelArea object>

identification_image_size : int

size of the identification image (see get_image_for_identification() and image_for_identification)

number_of_animals : int

number of animals to be tracked

check_maximal_number_of_blob(number_of_animals, return_maximum_number_of_blobs=False)[source]

Checks that the number of blobs per frame is not greater than the number of animals to track

Parameters:

number_of_animals : int

number of animals to be tracked

Returns:

list

List of indices of frames in which more blobs than animals to track have been segmented
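
Framewise, the check is a comparison of list lengths; an illustrative sketch over blobs_in_video:

    def frames_with_too_many_blobs(blobs_in_video, number_of_animals):
        # Indices of frames where segmentation produced more blobs than animals
        return [frame_number
                for frame_number, blobs_in_frame in enumerate(blobs_in_video)
                if len(blobs_in_frame) > number_of_animals]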

compute_crossing_fragment_identifier()[source]

Assign a unique identifier to fragments associated to a crossing

compute_fragment_identifier_and_blob_index(number_of_animals)[source]

Associates a unique fragment identifier to the fragments computed by overlapping (see compute_overlapping_between_subsequent_frames()) and sets the blob index as the hierarchy of the blob in the first frame of the fragment to which the blob belongs.

Parameters:

number_of_animals : int

number of animals to be tracked

compute_model_area_and_body_length(number_of_animals)[source]

Computes the median and standard deviation of the areas of all the blobs of the video and the median_body_length estimated from the diagonal of the bounding box. These values are later used to discard blobs that do not represent individual fish and potentially belong to a crossing.

compute_nose_and_head_coordinates()[source]

Computes nose and head coordinate for all the blobs segmented from the video

compute_overlapping_between_subsequent_frames()[source]

Computes overlapping between self and the blobs generated during segmentation in the next frames

connect()[source]

Connects blobs in subsequent frames by computing their overlapping

disconnect()[source]

Reinitialise the previous and next list of blobs (see next and previous)

erode(video)[source]

Erodes all the blobs in the video

Parameters:

video : <Video object>

see Video

get_data_plot()[source]

Gets the areas of all the blobs segmented in the video

Returns:

list

Areas of all the blobs segmented in the video

classmethod load(video, path_to_load_blob_list_file)[source]

Loads an instance of ListOfBlobs from path_to_load_blob_list_file.

Parameters:

video : <Video object>

See Video

path_to_load_blob_list_file : str

path to load a list of blobs

Returns:

<ListOfBlobs object>

an instance of ListOfBlobs

reconnect()[source]

Connects blobs in subsequent frames by computing their overlapping and sets blobs_are_connected to True

save(video, path_to_save=None, number_of_chunks=1)[source]

Saves the instance

update_from_list_of_fragments(fragments, fragment_identifier_to_index)[source]

Updates the blobs objects generated from the video with the attributes computed for each fragment

Parameters:

fragments : list

List of all the fragments

fragment_identifier_to_index : int

index to retrieve the fragment corresponding to a certain fragment identifier (see compute_fragment_identifier_and_blob_index())

Model area

Allows applying a model of the area of the individuals to be tracked to all the blobs collected during the segmentation process (see segmentation)

class model_area.ModelArea(mean, median, std)[source]

Model of the area used to perform a first discrimination between blobs representing a single individual and blobs representing multiple touching animals (crossings)

Attributes

median (float) median of the area of the blobs segmented from portions of the video in which all the animals are visible (not touching)
mean (float) mean of the area of the blobs segmented from portions of the video in which all the animals are visible (not touching)
std (float) standard deviation of the area of the blobs segmented from portions of the video in which all the animals are visible (not touching)
std_tolerance (int) tolerance factor

Methods

__call__(…) <==> x(…)
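
A plausible sketch of the test performed by __call__, based on the attributes above; the exact form of the tolerance comparison and the default value of std_tolerance are assumptions:

    class ModelArea:
        def __init__(self, mean, median, std, std_tolerance=4):
            self.mean, self.median, self.std = mean, median, std
            self.std_tolerance = std_tolerance  # assumed default

        def __call__(self, area):
            # An area is compatible with a single individual when it does not
            # deviate from the median by more than std_tolerance * std.
            return abs(area - self.median) < self.std_tolerance * self.std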

Segmentation

segmentation.get_blobs_in_frame(cap, video, segmentation_thresholds, max_number_of_blobs, frame_number)[source]

Segments a frame read from cap according to the preprocessing parameters in video. Returns a list blobs_in_frame with the Blob objects in the frame and the max_number_of_blobs found in the video so far. Frames are segmented in grayscale.

Parameters:

cap : <VideoCapture object>

OpenCV object used to read the frames of the video

video : <Video object>

Object collecting all the parameters of the video and paths for saving and loading

segmentation_thresholds : dict

Dictionary with the thresholds used for the segmentation: min_threshold, max_threshold, min_area, max_area

max_number_of_blobs : int

Maximum number of blobs found in the whole video so far in the segmentation process

frame_number : int

Number of the frame being segmented. It is used to print in the terminal the frames where the segmentation fails

Returns:

blobs_in_frame : list

List of <Blob object> segmented in the current frame

max_number_of_blobs : int

Maximum number of blobs found in the whole video so far in the segmentation process

See also

Video, Blob, segment_frame, blob_extractor
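
A hypothetical driver loop over the frames of a video; the capture path, the threshold values and the pre-configured video object are assumptions for illustration (get_blobs_in_frame is assumed to read the next frame from cap itself):

    import cv2
    from segmentation import get_blobs_in_frame  # assumed import path

    video = ...  # stands in for a configured <Video object>
    cap = cv2.VideoCapture("video.avi")  # assumed single-file video
    segmentation_thresholds = {"min_threshold": 0, "max_threshold": 155,
                               "min_area": 150, "max_area": 60000}  # assumed values
    number_of_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

    blobs_in_video, max_number_of_blobs = [], 0
    for frame_number in range(number_of_frames):
        blobs_in_frame, max_number_of_blobs = get_blobs_in_frame(
            cap, video, segmentation_thresholds, max_number_of_blobs, frame_number)
        blobs_in_video.append(blobs_in_frame)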

segmentation.get_videoCapture(video, path, episode_start_end_frames)[source]

Gives the VideoCapture (OpenCV) object to read the frames for the segmentation and the number of frames to read. If episode_start_end_frames is None, a path must be given, as the video is assumed to be split into different files (episodes). If path is None, the video is assumed to be in a single file and the path is read from video; in this case episode_start_end_frames must be given.

Parameters:

video : <Video object>

Object collecting all the parameters of the video and paths for saving and loading

path : string

Path to the video file from where to get the VideoCapture (OpenCV) object

episode_start_end_frames : tuple

Tuple (starting_frame, ending_frame) indicating the start and end of the episode when the video is given in a single file

Returns:

cap : <VideoCapture object>

OpenCV object used to read the frames of the video

number_of_frames_in_episode : int

Number of frames in the episode of video being segmented

segmentation.resegment(video, frame_number, list_of_blobs, new_segmentation_thresholds)[source]

Updates the list_of_blobs for a particular frame_number by performing a segmentation with new_segmentation_thresholds. This function is called for the frames in which the number of blobs is higher than the number of animals stated by the user.

Parameters:

video : <Video object>

Object collecting all the parameters of the video and paths for saving and loading

frame_number : int

Number of the frame to update with the new segmentation

list_of_blobs : <ListOfBlobs object>

Object containing the list of blobs segmented in the video in each frame

new_segmentation_thresholds : dict

Dictionary with the thresholds used for the new segmentation: min_threshold, max_threshold, min_area, max_area

Returns:

number_of_blobs_in_frame : int

Number of blobs found in the frame

segmentation.segment(video)[source]

Segment the video giving a list of blobs for every frame in the video. If a video is given as a set of files (episodes), those files are used to parallelise the segmentation process. If a video is given in a single file, the list of indices video.episodes_start_end is used for the parallelisation

Parameters:

video : <Video object>

Object collecting all the parameters of the video and paths for saving and loading

Returns:

blobs_in_video : list

List of blobs_in_frame for all the frames of the video

See also

segment_episode

segmentation.segment_episode(video, segmentation_thresholds, path=None, episode_start_end_frames=None)[source]

Gets the list of blobs segmented in every frame of the episode of the video given by path (if the video is split into different files) or by episode_start_end_frames (if the video is given in a single file)

Parameters:

video : <Video object>

Object collecting all the parameters of the video and paths for saving and loading

segmentation_thresholds : dict

Dictionary with the thresholds used for the segmentation: min_threshold, max_threshold, min_area, max_area

path : string

Path to the video file from where to get the VideoCapture (OpenCV) object

episode_start_end_frames : tuple

Tuple (starting_frame, ending_frame) indicating the start and end of the episode when the video is given in a single file

Returns:

blobs_in_episode : list

List of blobs_in_frame of the episode of the video being segmented

max_number_of_blobs : int

Maximum number of blobs found in the episode of the video being segmented

See also

Video, Blob, get_videoCapture, segment_frame, blob_extractor

Segmentation utils

video_utils.blob_extractor(segmented_frame, frame, min_area, max_area)[source]

Given a segmented_frame it extracts the blobs with area greater than min_area and smaller than max_area and it computes a set of relevant properties for every blob.

Parameters:

segmented_frame : nd.array

Frame with zeros and ones where ones are valid pixels.

frame : nd.array

Frame from where to extract the bounding_box_image of every blob

min_area : int

Minimum number of pixels for a blob to be considered valid

max_area : int

Maximum number of pixels for a blob to be considered valid

Returns:

bounding_boxes : list

List with the bounding_box for every contour in contours

bounding_box_images : list

List with the bounding_box_image for every contour in contours

centroids : list

List with the centroid for every contour in contours

areas : list

List with the area in pixels for every contour in contours

pixels : list

List with the pixels for every contour in contours

good_contours_in_full_frame : list

List with the contours whose area is between min_area and max_area

estimated_body_lengths : list

List with the estimated_body_length for every contour in contours

video_utils.check_background_substraction(video, old_video, use_previous_background)[source]

Checks whether background subtraction must be used and, in that case, whether a previously computed background can be reused; otherwise it computes the background model from scratch. Returns None if background subtraction must not be used.

Parameters:

video : <Video object>

Object collecting all the parameters of the video and paths for saving and loading

old_video : <Video object>

Same object as video but with the information of a previous tracking session

use_previous_background : bool

Indicates whether the background computed in a previous session must be used or not

Returns:

bkg : nd.array

Array with the model of the background.

video_utils.cnt2BoundingBox(cnt, bounding_box)[source]

Transforms the coordinates of the contour in the full frame to the bounding box of the blob.

Parameters:

cnt : list

List of the coordinates that define the contour of the blob in the full frame of the video

bounding_box : tuple

Tuple with the coordinates of the bounding box ((x, y), (x + w, y + h))

Returns:

contour_in_bounding_box : nd.array

Array with the pairs of coordinates of the contour in the bounding box

video_utils.cumpute_background(video)[source]

Computes a model of the background by averaging multiple frames of the video. In particular, 1 out of every 100 frames is used for the computation.

Parameters:

video : <Video object>

Object collecting all the parameters of the video and paths for saving and loading

Returns:

bkg : nd.array

Array with the model of the background.
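
A sketch of the averaging scheme described above (1 frame out of every 100), with assumed video I/O:

    import cv2
    import numpy as np

    def compute_background_sketch(video_path):
        cap = cv2.VideoCapture(video_path)
        total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        acc, n = None, 0
        for frame_number in range(0, total, 100):  # 1 frame every 100
            cap.set(cv2.CAP_PROP_POS_FRAMES, frame_number)
            ret, frame = cap.read()
            if not ret:
                continue
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
            acc = gray if acc is None else acc + gray
            n += 1
        cap.release()
        return acc / n  # model of the background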

video_utils.filter_contours_by_area(contours, min_area, max_area)[source]

Filters out contours whose number of pixels is smaller than min_area or greater than max_area

Parameters:

contours : list

List of OpenCV contours

min_area : int

Minimum number of pixels for a contour to be acceptable

max_area : int

Maximum number of pixels for a contour to be acceptable

Returns:

good_contours : list

List of OpenCV contours that fulfill both area thresholds
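
Sketched with cv2.contourArea (whether the bounds are inclusive is an assumption):

    import cv2

    def filter_contours_by_area(contours, min_area, max_area):
        return [cnt for cnt in contours
                if min_area <= cv2.contourArea(cnt) <= max_area]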

video_utils.getCentroid(cnt)[source]

Computes the centroid of the contour

Parameters:

cnt : list

List of the coordinates that defines the contour of the blob in the full frame of the video

Returns:

centroid : tuple

(x,y) coordinates of the center of mass of the contour.
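
The centre of mass follows from OpenCV image moments; a sketch:

    import cv2

    def get_centroid_sketch(cnt):
        m = cv2.moments(cnt)
        # (m10 / m00, m01 / m00) is the standard centre-of-mass formula
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])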

video_utils.get_blobs_information_per_frame(frame, contours)[source]

Computes a set of properties for all the contours in a given frame.

Parameters:

frame : nd.array

Frame from where to extract the bounding_box_image of every contour

contours : list

List of OpenCV contours for which to compute the set of properties

Returns:

bounding_boxes : list

List with the bounding_box for every contour in contours

bounding_box_images : list

List with the bounding_box_image for every contour in contours

centroids : list

List with the centroid for every contour in contours

areas : list

List with the area in pixels for every contour in contours

pixels : list

List with the pixels for every contour in contours

estimated_body_lengths : list

List with the estimated_body_length for every contour in contours

video_utils.get_bounding_box(cnt, width, height, crossing_detector=False)[source]

Computes the bounding box of a given contour with an extra margin of 45 pixels. If the function is called with the crossing_detector set to True the margin of the bounding box is set to 55.

Parameters:

cnt : list

List of the coordinates that define the contour of the blob in the full frame of the video

width : int

Width of the video frame

height : int

Height of the video frame

crossing_detector : bool

Flag to indicate whether the function is being called from the crossing_detector module

Returns:

bounding_box : tuple

Tuple with the coordinates of the bounding box ((x, y), (x + w, y + h))

original_diagonal : int

Diagonal of the original bounding box computed with OpenCV that serves as an estimate of the body length of the animal.
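
A sketch of the expanded bounding box; the margins follow the description above, while clamping to the frame borders is an assumption:

    import cv2
    import numpy as np

    def get_bounding_box_sketch(cnt, width, height, crossing_detector=False):
        margin = 55 if crossing_detector else 45
        x, y, w, h = cv2.boundingRect(cnt)
        original_diagonal = int(np.ceil(np.sqrt(w ** 2 + h ** 2)))  # body-length estimate
        top_left = (max(0, x - margin), max(0, y - margin))
        bottom_right = (min(width, x + w + margin), min(height, y + h + margin))
        return (top_left, bottom_right), original_diagonal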

video_utils.get_bounding_box_image(frame, cnt)[source]

Computes the bounding_box_image from a given frame and contour. It also returns the coordinates of the bounding_box, the ravelled pixels inside the contour and the diagonal of the bounding_box as an estimated_body_length

Parameters:

frame : nd.array

frame from where to extract the bounding_box_image

cnt : list

List of the coordinates that define the contour of the blob in the full frame of the video

Returns:

bounding_box : tuple

Tuple with the coordinates of the bounding box (x, y),(x + w, y + h))

bounding_box_image : nd.array

Part of the frame defined by the coordinates in bounding_box

pixels_in_full_frame_ravelled : list

List of ravelled pixels coordinates inside of the given contour

estimated_body_length : int

Estimated length of the contour in pixels.

video_utils.get_pixels(cnt, width, height)[source]

Gets the coordinates list of the pixels inside the contour

Parameters:

cnt : list

List of the coordinates that define the contour of the blob in a given width and height (it can be either the video width and height or the bounding box width and height)

width : int

Width of the frame

height : int

Height of the frame

Returns:

pixels_coordinates_list : list

List of the coordinates of the pixels in a given width and height
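
One way to rasterise the interior of a contour with OpenCV (a sketch; the ordering of the returned coordinates is an assumption):

    import cv2
    import numpy as np

    def get_pixels_sketch(cnt, width, height):
        canvas = np.zeros((height, width), dtype=np.uint8)
        cv2.drawContours(canvas, [cnt], -1, color=255, thickness=-1)  # filled
        ys, xs = np.where(canvas == 255)
        return list(zip(ys, xs))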

video_utils.segment_frame(frame, min_threshold, max_threshold, bkg, ROI, useBkg)[source]

Applies the intensity thresholds (min_threshold and max_threshold) and the mask (ROI) to a given frame. If useBkg is True, the background subtraction operation is applied before thresholding with the given bkg.

Parameters:

frame : nd.array

Frame to be segmented

min_threshold : int

Minimum intensity threshold for the segmentation (value from 0 to 255)

max_threshold : int

Maximum intensity threshold for the segmentation (value from 0 to 255)

bkg : nd.array

Background model to be used in the background subtraction operation

ROI : nd.array

Mask to be applied after thresholding. Ones in the array are pixels to be considered, zeros are pixels to be discarded.

useBkg : bool

Flag indicating whether background subtraction must be performed or not

Returns:

frame_segmented_and_masked : nd.array

Frame with zeros and ones after applying the thresholding and the mask. Pixels with value 1 are valid pixels given the thresholds and the mask.
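
An illustrative pipeline consistent with the description; the exact order of background subtraction, thresholding and masking is an assumption:

    import cv2
    import numpy as np

    def segment_frame_sketch(frame, min_threshold, max_threshold, bkg, ROI, useBkg):
        if useBkg:
            frame = cv2.absdiff(bkg.astype(frame.dtype), frame)
        segmented = (frame >= min_threshold) & (frame <= max_threshold)
        # Keep only pixels that pass the thresholds and lie inside the ROI
        return (segmented & (ROI > 0)).astype(np.uint8)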

video_utils.sum_frames_for_bkg_per_episode_in_multiple_files_video(video_path, bkg)[source]

Computes the sum of frames (1 every 100 frames) for a particular episode of the video when the video is split into several files

Parameters:

video_path : string

Path to the file of the episode to be added to the background

bkg : nd.array

Zeros array with same width and height as the frame of the video.

Returns:

bkg : nd.array

Array with same width and height as the frame of the video. Contains the sum of (ending_frame - starting_frame) / 100 frames for the given episode

number_of_frames_for_bkg_in_episode : int

Number of frames used to compute the background in the current episode

video_utils.sum_frames_for_bkg_per_episode_in_single_file_video(starting_frame, ending_frame, video_path, bkg)[source]

Computes the sum of frames (1 every 100 frames) for a particular episode of the video when the video is a single file.

Parameters:

starting_frame : int

First frame of the episode

ending_frame : int

Last frame of the episode

video_path : string

Path to the single file of the video

bkg : nd.array

Zeros array with same width and height as the frame of the video.

Returns:

bkg : nd.array

Array with same width and height as the frame of the video. Contains the sum of (ending_frame - starting_frame) / 100 frames for the given episode

number_of_frames_for_bkg_in_episode : int

Number of frames used to compute the background in the current episode