
SSDLoss

SSDLoss class

dlf.losses.ssd_loss.SSDLoss(neg_ratio, num_classes, name="ssd_loss", from_logits=False)
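A hypothetical construction sketch is shown below. The argument values are illustrative only: neg_ratio presumably sets the ratio of negative to positive anchors kept during hard negative mining (standard SSD practice, not stated on this page), and num_classes is assumed here to include the background class. The expected y_true/y_pred encoding is defined by the dlf anchor-matching pipeline.

from dlf.losses.ssd_loss import SSDLoss

# Illustrative values (assumption): 3 negative anchors per positive, 21 classes.
ssd_loss = SSDLoss(neg_ratio=3, num_classes=21, from_logits=True)

# As a Keras loss, it can then be used in the usual workflow, e.g.:
# model.compile(optimizer="adam", loss=ssd_loss)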

SSDLoss extends the Keras Loss base class; the notes below are inherited from that base class.

To be implemented by subclasses:

* call(): Contains the logic for loss calculation using y_true, y_pred.

Example subclass implementation:

import tensorflow as tf

class MeanSquaredError(tf.keras.losses.Loss):

  def call(self, y_true, y_pred):
    # Per-example mean squared error over the last axis.
    y_pred = tf.convert_to_tensor(y_pred)
    y_true = tf.cast(y_true, y_pred.dtype)
    return tf.reduce_mean(tf.math.square(y_pred - y_true), axis=-1)
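A quick usage check of the subclass above, with values chosen purely for illustration; the default reduction averages the per-example losses over the batch:

mse = MeanSquaredError()
print(mse(tf.constant([[0., 1.]]), tf.constant([[1., 1.]])).numpy())  # 0.5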

When used with tf.distribute.Strategy outside of the built-in training loops (tf.keras compile and fit), use the 'SUM' or 'NONE' reduction types and reduce losses explicitly in your training loop. Using 'AUTO' or 'SUM_OVER_BATCH_SIZE' will raise an error.

For more details, see the TensorFlow custom training tutorial.

You can implement 'SUM_OVER_BATCH_SIZE' scaling using the global batch size like this:

with strategy.scope():
  # Keep the per-example losses; they are reduced manually below.
  loss_obj = tf.keras.losses.CategoricalCrossentropy(
      reduction=tf.keras.losses.Reduction.NONE)
  ....
  # Scale by the global batch size to reproduce SUM_OVER_BATCH_SIZE.
  loss = (tf.reduce_sum(loss_obj(labels, predictions)) *
          (1. / global_batch_size))
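
Below is a minimal sketch of how such an explicitly reduced loss can be used inside a distributed training step; strategy, model, optimizer, loss_obj (with reduction=NONE), and global_batch_size are assumed to be defined already, and tf.nn.compute_average_loss is used as an equivalent way to divide by the global batch size.

@tf.function
def train_step(dist_inputs):
  def step_fn(inputs):
    images, labels = inputs
    with tf.GradientTape() as tape:
      predictions = model(images, training=True)
      per_example_loss = loss_obj(labels, predictions)
      # Average over the global batch, not just this replica's slice.
      loss = tf.nn.compute_average_loss(
          per_example_loss, global_batch_size=global_batch_size)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

  per_replica_losses = strategy.run(step_fn, args=(dist_inputs,))
  return strategy.reduce(
      tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)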