basicsr.archs.hifacegan_util

class basicsr.archs.hifacegan_util.BaseNetwork[source]

Bases: Module

A base class for HiFaceGAN architectures with custom weight initialization.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the forward pass must be defined within this function, call the Module instance itself rather than this method directly: the instance call runs any registered hooks, while a direct call to forward() silently ignores them.

init_weights(init_type='normal', gain=0.02)[source]
training: bool
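
A minimal usage sketch (an illustration, not part of the documented API): subclass BaseNetwork, define forward, then call init_weights to re-initialize the parameters. TinyNet below is a hypothetical example class; the init_type and gain values are simply the documented defaults.

>>> import torch
>>> import torch.nn as nn
>>> from basicsr.archs.hifacegan_util import BaseNetwork
>>> class TinyNet(BaseNetwork):  # hypothetical subclass for illustration
...     def __init__(self):
...         super().__init__()
...         self.conv = nn.Conv2d(3, 8, 3, padding=1)
...     def forward(self, x):
...         return self.conv(x)
>>> net = TinyNet()
>>> net.init_weights(init_type='normal', gain=0.02)
>>> net(torch.randn(1, 3, 16, 16)).shape
torch.Size([1, 8, 16, 16])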
class basicsr.archs.hifacegan_util.LIPEncoder(input_nc, ngf, sw, sh, n_2xdown, norm_layer=<class 'torch.nn.modules.instancenorm.InstanceNorm2d'>)[source]

Bases: BaseNetwork

Local Importance-based Pooling (Ziteng Gao et al., ICCV 2019)

forward(x)[source]

Defines the computation performed at every call. See the note under BaseNetwork.forward().

training: bool
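
A construction sketch, assuming the parameters mean: input_nc input channels, ngf base feature width, (sw, sh) the bottleneck spatial size, and n_2xdown the number of 2x LIP downsampling stages. These readings are inferred from the signature, not verified against the implementation.

>>> import torch
>>> from basicsr.archs.hifacegan_util import LIPEncoder
>>> enc = LIPEncoder(input_nc=3, ngf=64, sw=16, sh=16, n_2xdown=3)
>>> feat = enc(torch.randn(1, 3, 128, 128))  # 128 / 2**3 = 16, matching (sw, sh) under the assumption above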
class basicsr.archs.hifacegan_util.SPADE(config_text, norm_nc, label_nc)[source]

Bases: Module

forward(x, segmap)[source]

Defines the computation performed at every call. See the note under BaseNetwork.forward().

training: bool
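
SPADE is spatially-adaptive (de)normalization (Park et al., CVPR 2019): the input is normalized by a parameter-free norm, then scaled and shifted by per-pixel gamma and beta maps predicted from a segmentation (or guidance) map. A minimal sketch, assuming config_text follows the 'spade&lt;norm&gt;&lt;k&gt;x&lt;k&gt;' convention implied by the norm_g default of SPADEResnetBlock, and that the segmentation map is resized internally to match x:

>>> import torch
>>> from basicsr.archs.hifacegan_util import SPADE
>>> norm = SPADE('spadeinstance3x3', norm_nc=64, label_nc=3)
>>> x = torch.randn(1, 64, 32, 32)        # features to normalize
>>> segmap = torch.randn(1, 3, 256, 256)  # guidance map (assumed to be resized internally)
>>> norm(x, segmap).shape
torch.Size([1, 64, 32, 32])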
class basicsr.archs.hifacegan_util.SPADEResnetBlock(fin, fout, norm_g='spectralspadesyncbatch3x3', semantic_nc=3)[source]

Bases: Module

ResNet block that uses SPADE. It differs from the pix2pixHD ResNet block in that it takes the segmentation map as input, learns the skip connection if necessary, and applies normalization before convolution. This appears to be a standard residual-block design for unconditional and class-conditional GANs. The code was inspired by https://github.com/LMescheder/GAN_stability.

act(x)[source]
forward(x, seg)[source]

Defines the computation performed at every call. See the note under BaseNetwork.forward().

shortcut(x, seg)[source]
training: bool
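
A usage sketch. norm_g is set here to 'spectralspadeinstance3x3', a hypothetical but convention-consistent choice that avoids any synchronized-batch-norm setup the default might expect; fin, fout, and semantic_nc follow the signature.

>>> import torch
>>> from basicsr.archs.hifacegan_util import SPADEResnetBlock
>>> block = SPADEResnetBlock(fin=64, fout=64, norm_g='spectralspadeinstance3x3', semantic_nc=3)
>>> x = torch.randn(1, 64, 32, 32)
>>> seg = torch.randn(1, 3, 32, 32)
>>> block(x, seg).shape  # residual path plus (possibly learned) shortcut, both SPADE-conditioned
torch.Size([1, 64, 32, 32])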
class basicsr.archs.hifacegan_util.SimplifiedLIP(channels)[source]

Bases: Module

forward(x)[source]

Defines the computation performed at every call. See the note under BaseNetwork.forward().

init_layer()[source]
training: bool
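
A pooling sketch, assuming SimplifiedLIP predicts a per-pixel importance logit from its input and downsamples with lip2d at the default stride of 2, and that init_layer initializes the logit branch (e.g. to start near plain average pooling). Neither assumption is verified here.

>>> import torch
>>> from basicsr.archs.hifacegan_util import SimplifiedLIP
>>> pool = SimplifiedLIP(channels=64)
>>> pool.init_layer()
>>> pool(torch.randn(1, 64, 32, 32)).shape  # halved spatially, assuming lip2d's default stride=2
torch.Size([1, 64, 16, 16])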
class basicsr.archs.hifacegan_util.SoftGate[source]

Bases: Module

COEFF = 12.0
forward(x)[source]

Defines the computation performed at every call. See the note under BaseNetwork.forward().

training: bool
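
A usage sketch. A common reading of this module in LIP-style code is a scaled sigmoid, sigmoid(x) * COEFF, which bounds the logits fed to lip2d's exponential and keeps it numerically stable; that reading is an assumption, not confirmed by this page.

>>> import torch
>>> from basicsr.archs.hifacegan_util import SoftGate
>>> gate = SoftGate()
>>> y = gate(torch.randn(2, 4))  # assumed sigmoid(x) * COEFF, so values in (0, 12)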
basicsr.archs.hifacegan_util.get_nonspade_norm_layer(norm_type='instance')[source]
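
A usage sketch, assuming the function returns a callable that wraps a given layer with the requested normalization, with a 'spectral' prefix in norm_type adding spectral normalization first, as in SPADE-style codebases. The assumption covers both the return type and the wrapping behavior.

>>> import torch.nn as nn
>>> from basicsr.archs.hifacegan_util import get_nonspade_norm_layer
>>> add_norm = get_nonspade_norm_layer('instance')
>>> layer = add_norm(nn.Conv2d(3, 64, 3, padding=1))  # assumed: conv followed by InstanceNorm2d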
basicsr.archs.hifacegan_util.lip2d(x, logit, kernel=3, stride=2, padding=1)[source]
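
For reference, Local Importance-based Pooling as defined in the LIP paper is an average pool weighted by exp(logit); the sketch below implements that formula independently. Whether this module's lip2d matches it exactly is an assumption.

>>> import torch
>>> import torch.nn.functional as F
>>> def lip2d_reference(x, logit, kernel=3, stride=2, padding=1):
...     # LIP: weighted average pooling, weights = exp(logit)
...     weight = logit.exp()
...     return (F.avg_pool2d(x * weight, kernel, stride, padding)
...             / F.avg_pool2d(weight, kernel, stride, padding))
>>> x, logit = torch.randn(1, 8, 32, 32), torch.randn(1, 8, 32, 32)
>>> lip2d_reference(x, logit).shape
torch.Size([1, 8, 16, 16])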