experiment

Experiment class

dlf.core.experiment.Experiment(config)

A representation of an experiment

This class represents the setup of an experiment. Based on a *.yaml configuration file, it maps the configured values and initializes all required elements (see the usage sketch below).

Args

  • config: str. Path to a configuration file

Raises

  • FileNotFoundError: If the specified configuration file does not exist
  • KeyError: If required keys are missing in a configuration file
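
A minimal usage sketch, assuming only the documented constructor; the configuration path is hypothetical:

    from dlf.core.experiment import Experiment

    # Parses the YAML file and initializes all configured experiment
    # components; raises FileNotFoundError or KeyError as described above.
    experiment = Experiment('configs/my_experiment.yaml')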

ExperimentTarget class

dlf.core.experiment.ExperimentTarget(*args, **kwargs)

Enum to identify the current state during the experiment


InputReaderSettings class

dlf.core.experiment.InputReaderSettings(settings)

Container for all selected input readers

Args

  • settings: dict[str,Any]. Dictionary of the input_reader section of the configuration file

YAML Configuration

input_reader:
    training_reader:
        name: fs_random_unpaired_reader
        path_lhs: /mnt/data/datasets/theodore_wheeled_walker/*_img.png
        path_rhs: /mnt/data/datasets/omnidetector-Flat/JPEGImages/*.jpg
        lhs_limit: 10000
        rhs_limit: 10000
        shuffle_buffer: 100
        preprocess_list:
            resize:
                output_shape:
                - 512
                - 512
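
A sketch of loading the configuration and constructing the reader settings, assuming PyYAML for parsing; the file path is hypothetical:

    import yaml

    from dlf.core.experiment import InputReaderSettings

    with open('configs/my_experiment.yaml') as f:
        config = yaml.safe_load(f)

    # Pass only the input_reader section of the parsed configuration.
    readers = InputReaderSettings(config['input_reader'])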

set_batch_size method

InputReaderSettings.set_batch_size(batch_size)

Updates the batch size of all input readers

Args

  • batch_size: int. Size of a batch
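
Continuing the sketch above, a single batch size is applied to every configured reader; the value is illustrative:

    # Propagates the batch size to all configured input readers.
    readers.set_batch_size(16)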

ModelSettings class

dlf.core.experiment.ModelSettings(settings)

Object to describe model-specific settings

Args

  • settings: dict[str,Any]. Dictionary to initialize a model

Raises

  • KeyError: If the specified model is not available or its setup is invalid
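
A minimal sketch, assuming the model section of the configuration is stored under a model key (the key name is an assumption, not documented here):

    from dlf.core.experiment import ModelSettings

    # 'model' as the section key is an assumption; raises KeyError if the
    # configured model is not available or the setup is invalid.
    model_settings = ModelSettings(config['model'])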

SaveStrategy class

dlf.core.experiment.SaveStrategy(monitor, mode)

Represents a saving strategy for a model during training

This class implements a simple saving strategy to save models during training. In the configuration file you can specify different metrics and losses. Each of them has a unique name, and depending on the evaluation target an alias is prepended.

For instance, suppose you set up the metric SparseMeanIOU. The value of this metric is propagated through all evaluation steps: during training the monitor value is train_sparse_mean_iou and during validation it is val_sparse_mean_iou. As the second parameter you specify whether to save the model when the minimum or the maximum value of this metric improves during training.
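
A sketch based on the example above, using the documented monitor naming convention:

    from dlf.core.experiment import SaveStrategy

    # Save whenever the validation IoU reaches a new maximum.
    strategy = SaveStrategy('val_sparse_mean_iou', 'max')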

Args

  • monitor: str. Value to monitor during training
  • mode: {'max','min'}. Whether to save the model when the minimum ('min') or the maximum ('max') value of the monitor improves

need_to_save method

SaveStrategy.need_to_save(monitor_values)

Decides whether the model should be saved or not

Args

  • monitor_values: dict[str,float]. Dictionary containing monitor values

Returns

  • bool: Whether the monitored value has improved and the model needs to be saved.
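
Continuing the sketch, the monitored value is looked up in the passed dictionary; the numbers are illustrative:

    # True once val_sparse_mean_iou improves on the best value seen so far.
    if strategy.need_to_save({'val_sparse_mean_iou': 0.73}):
        pass  # persist the model here; the saving call itself is not documented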

TrainingSettings class

dlf.core.experiment.TrainingSettings(settings)

Object to describe the selected training settings

This object parses the training section of a config file. Based on the configuration, callbacks and metrics are initialized for the experiment.

Args

  • settings: dict[str,Any]. The training section of the configuration file

Raises

  • Exception: If the monitor value is not specified
  • KeyError: If the configuration file does not contain all required keys for the training section

generate_metrics method

TrainingSettings.generate_metrics()

Generate instances of the selected and available metrics

Returns

  • list: Instances of the selected metrics
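
A closing sketch, assuming the training section of the configuration is stored under a training key (an assumption; only the constructor and this method are documented):

    from dlf.core.experiment import TrainingSettings

    # 'training' as the section key is an assumption.
    training = TrainingSettings(config['training'])

    # Instances of the selected metrics, e.g. for an evaluation loop.
    metrics = training.generate_metrics()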