basicsr.losses.__init__

basicsr.losses.__init__.build_loss(opt)[source]

Build loss from options.

Parameters:

opt (dict) – Configuration. It must contain the key type (str), the class name of the loss to build (e.g. L1Loss); the remaining keys are passed to the loss constructor.
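
A minimal, self-contained sketch of the registry pattern typically behind such a builder (the `register_loss` decorator and the stand-in `LOSS_REGISTRY` dict here are illustrative, not basicsr's actual objects): `build_loss` pops type from a copy of the options dict and instantiates the registered class with the remaining keys as keyword arguments.

```python
# Illustrative sketch of a registry-based build_loss; names below are
# stand-ins, not basicsr's real registry objects.
import copy

LOSS_REGISTRY = {}  # hypothetical stand-in for the loss registry


def register_loss(cls):
    """Register a loss class under its class name."""
    LOSS_REGISTRY[cls.__name__] = cls
    return cls


@register_loss
class L1Loss:
    def __init__(self, loss_weight=1.0, reduction='mean'):
        self.loss_weight = loss_weight
        self.reduction = reduction


def build_loss(opt):
    """Build a loss from an options dict containing a 'type' key."""
    opt = copy.deepcopy(opt)  # do not mutate the caller's config
    loss_type = opt.pop('type')
    return LOSS_REGISTRY[loss_type](**opt)


loss = build_loss({'type': 'L1Loss', 'loss_weight': 0.5})
```

Deep-copying the options before popping type keeps the caller's configuration dict reusable across multiple builds.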

basicsr.losses.__init__.g_path_regularize(fake_img, latents, mean_path_length, decay=0.01)[source]

Path length regularization for the generator, as used in StyleGAN2: it penalizes deviation of the latent-to-image gradient norm from a running mean, which is updated with the given decay.

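
A hedged sketch of StyleGAN2-style path length regularization under the documented signature (assuming latents of shape (batch, num_latents, latent_dim) and 4-D fake images): perturb the images with scaled noise, backpropagate to the latents, and penalize deviation of the gradient norm from a running mean updated with decay.

```python
# Sketch of StyleGAN2 path length regularization; a plausible
# implementation under the documented signature, not necessarily
# identical to basicsr's.
import math

import torch
from torch import autograd


def g_path_regularize(fake_img, latents, mean_path_length, decay=0.01):
    # Noise scaled by 1/sqrt(H*W) so the expected perturbation magnitude
    # is independent of image resolution.
    noise = torch.randn_like(fake_img) / math.sqrt(
        fake_img.shape[2] * fake_img.shape[3])
    grad = autograd.grad(
        outputs=(fake_img * noise).sum(), inputs=latents,
        create_graph=True)[0]
    # L2 norm over the latent dim, averaged over the latent entries.
    path_lengths = torch.sqrt(grad.pow(2).sum(2).mean(1))
    # Exponential moving average of the path length.
    path_mean = mean_path_length + decay * (
        path_lengths.mean() - mean_path_length)
    path_penalty = (path_lengths - path_mean).pow(2).mean()
    return path_penalty, path_lengths.detach().mean(), path_mean.detach()
```

The returned updated mean is meant to be fed back in as mean_path_length on the next call.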
basicsr.losses.__init__.gradient_penalty_loss(discriminator, real_data, fake_data, weight=None)[source]

Calculate the gradient penalty for WGAN-GP.

Parameters:
  • discriminator (nn.Module) – Network for the discriminator.

  • real_data (Tensor) – Real input data.

  • fake_data (Tensor) – Fake input data.

  • weight (Tensor) – Optional element-wise weight applied to the gradients before the penalty is computed. Default: None.

Returns:

A scalar tensor holding the gradient penalty.

Return type:

Tensor
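
A hedged sketch of the WGAN-GP penalty under this signature: sample a random point on the line between each real and fake sample, run the discriminator on it, and penalize the deviation of the gradient norm from 1. The optional weight handling shown here is one plausible interpretation of the weight parameter.

```python
# Sketch of the WGAN-GP gradient penalty; a plausible implementation
# under the documented signature, not necessarily identical to basicsr's.
import torch
from torch import autograd


def gradient_penalty_loss(discriminator, real_data, fake_data, weight=None):
    batch_size = real_data.size(0)
    # One interpolation coefficient per sample, broadcast over C, H, W.
    alpha = torch.rand(batch_size, 1, 1, 1, device=real_data.device)
    interpolates = alpha * real_data + (1.0 - alpha) * fake_data
    interpolates.requires_grad_(True)
    disc_interpolates = discriminator(interpolates)
    gradients = autograd.grad(
        outputs=disc_interpolates,
        inputs=interpolates,
        grad_outputs=torch.ones_like(disc_interpolates),
        create_graph=True)[0]
    if weight is not None:
        gradients = gradients * weight
    # Penalize deviation of the gradient norm from 1 (the 1-Lipschitz
    # constraint of WGAN-GP).
    gradients_penalty = ((gradients.norm(2, dim=1) - 1) ** 2).mean()
    if weight is not None:
        gradients_penalty = gradients_penalty / torch.mean(weight)
    return gradients_penalty
```

create_graph=True keeps the penalty differentiable, so it can be added to the discriminator loss and backpropagated through.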

basicsr.losses.__init__.r1_penalty(real_pred, real_img)[source]

R1 regularization for the discriminator. The core idea is to penalize the gradient on real data alone: when the generator distribution matches the true data distribution and the discriminator is zero on the data manifold, this penalty ensures that the discriminator cannot create a non-zero gradient orthogonal to the data manifold without suffering a loss in the GAN game.

Reference: Eq. 9 in "Which Training Methods for GANs Do Actually Converge?" (Mescheder et al., ICML 2018).
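
The penalty described above reduces to the squared gradient norm of the discriminator output with respect to the real images, averaged over the batch. A hedged sketch under the documented signature:

```python
# Sketch of the R1 penalty (Eq. 9, Mescheder et al. 2018); a plausible
# implementation under the documented signature.
import torch
from torch import autograd


def r1_penalty(real_pred, real_img):
    # Gradient of the summed discriminator scores w.r.t. the real images;
    # real_img must have requires_grad=True when real_pred is computed.
    grad_real = autograd.grad(
        outputs=real_pred.sum(), inputs=real_img, create_graph=True)[0]
    # Squared L2 norm per sample, then mean over the batch.
    return grad_real.pow(2).reshape(grad_real.shape[0], -1).sum(1).mean()
```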