pixyz.losses (Loss API)¶

Loss¶

class pixyz.losses.losses.Loss(p1, p2=None, input_var=None)[source]

Bases: object

input_var
loss_text
mean()[source]
sum()[source]
estimate(x={}, **kwargs)[source]
train(x={}, **kwargs)[source]

Train the implicit (adversarial) loss function.

test(x={}, **kwargs)[source]

Test the implicit (adversarial) loss function.
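
Loss objects compose through Python's arithmetic operators (see the Operators section below). The following is a minimal pure-Python sketch of that deferred-evaluation pattern, not pixyz's actual implementation; the `ValueLoss` leaf node is a hypothetical stand-in for a concrete loss.

```python
# Schematic sketch of composable losses (NOT pixyz's implementation):
# operators build a tree of Loss nodes, and nothing is evaluated
# until estimate() is called at the root.

class Loss:
    def estimate(self, x={}):
        raise NotImplementedError

    def __add__(self, other):
        return AddLoss(self, other)

    def __neg__(self):
        return NegLoss(self)


class ValueLoss(Loss):
    """Hypothetical leaf node that returns a fixed value."""
    def __init__(self, value):
        self.value = value

    def estimate(self, x={}):
        return self.value


class AddLoss(Loss):
    """Sum of two losses, mirroring pixyz's AddLoss operator."""
    def __init__(self, loss1, loss2):
        self.loss1, self.loss2 = loss1, loss2

    def estimate(self, x={}):
        return self.loss1.estimate(x) + self.loss2.estimate(x)


class NegLoss(Loss):
    """Negation of a loss, mirroring pixyz's NegLoss operator."""
    def __init__(self, loss1):
        self.loss1 = loss1

    def estimate(self, x={}):
        return -self.loss1.estimate(x)


loss = ValueLoss(2.0) + (-ValueLoss(0.5))
print(loss.estimate())  # 1.5
```

The same idea lets an expression such as `NLL(p) + KullbackLeibler(q, prior)` stay symbolic until `estimate()` is called with data.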

Negative expected value of log-likelihood (entropy)¶

CrossEntropy¶

class pixyz.losses.CrossEntropy(p1, p2, input_var=None)[source]

Cross entropy, a.k.a., the negative expected value of log-likelihood (Monte Carlo approximation).

-\mathbb{E}_{p_1(x)}\left[\log p_2(x)\right] \approx -\frac{1}{L}\sum_{l=1}^{L}\log p_2(x_l),

where x_l \sim p_1(x).

loss_text
estimate(x={})[source]
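
As a plain-Python illustration of the quantity this class approximates (not the pixyz API): a Monte Carlo estimate of the cross entropy between two univariate Gaussians, checked against the closed form.

```python
import math
import random

random.seed(0)

def log_normal_pdf(x, mu, sigma):
    """Log density of a univariate Gaussian N(mu, sigma^2)."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

mu1, s1 = 0.0, 1.0  # p_1, the sampling distribution
mu2, s2 = 1.0, 2.0  # p_2, the distribution being scored

# Monte Carlo estimate of H(p_1, p_2) = -E_{p_1(x)}[log p_2(x)]
L = 100_000
samples = [random.gauss(mu1, s1) for _ in range(L)]
ce_mc = -sum(log_normal_pdf(x, mu2, s2) for x in samples) / L

# Closed form for two Gaussians, to check the estimate against:
# H(p_1, p_2) = log(s2 * sqrt(2*pi)) + (s1^2 + (mu1 - mu2)^2) / (2 * s2^2)
ce_exact = math.log(s2 * math.sqrt(2 * math.pi)) \
    + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2)

print(ce_mc, ce_exact)  # the two agree to roughly two decimal places
```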

Entropy¶

class pixyz.losses.Entropy(p1, input_var=None)[source]

Entropy (Monte Carlo approximation).

-\mathbb{E}_{p_1(x)}\left[\log p_1(x)\right] \approx -\frac{1}{L}\sum_{l=1}^{L}\log p_1(x_l),

where x_l \sim p_1(x).

Note:
This class is a special case of the CrossEntropy class. You can get the same result with CrossEntropy.
loss_text
estimate(x={})[source]
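
Reflecting the note above, entropy is the cross entropy of a distribution with itself. A plain-Python illustration (not the pixyz API): a Monte Carlo entropy estimate for a Gaussian, checked against the closed form.

```python
import math
import random

random.seed(0)

def log_normal_pdf(x, mu, sigma):
    """Log density of a univariate Gaussian N(mu, sigma^2)."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

mu, sigma = 0.0, 1.5
L = 100_000
samples = [random.gauss(mu, sigma) for _ in range(L)]

# Monte Carlo entropy H(p) = -E_p[log p(x)], i.e. cross entropy of p with itself
h_mc = -sum(log_normal_pdf(x, mu, sigma) for x in samples) / L

# Closed form for a Gaussian: H = 0.5 * log(2 * pi * e * sigma^2)
h_exact = 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

print(h_mc, h_exact)
```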

StochasticReconstructionLoss¶

class pixyz.losses.StochasticReconstructionLoss(encoder, decoder, input_var=None)[source]

Reconstruction Loss (Monte Carlo approximation).

-\mathbb{E}_{q(z|x)}\left[\log p(x|z)\right] \approx -\frac{1}{L}\sum_{l=1}^{L}\log p(x|z_l),

where z_l \sim q(z|x).

Note:
This class is a special case of the CrossEntropy class. You can get the same result with CrossEntropy.
loss_text
estimate(x={})[source]

Negative log-likelihood¶

NLL¶

class pixyz.losses.NLL(p, input_var=None)[source]

Negative log-likelihood.

loss_text
estimate(x={})[source]
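
A plain-Python illustration of the quantity (not the pixyz API): the negative log-likelihood of a single observation under a univariate Gaussian.

```python
import math

def nll_normal(x, mu, sigma):
    """Negative log-likelihood -log p(x) under a univariate Gaussian N(mu, sigma^2)."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (x - mu) ** 2 / (2 * sigma ** 2)

# The NLL is smallest when the observation sits at the mean ...
print(nll_normal(0.0, 0.0, 1.0))  # 0.5 * log(2*pi), about 0.9189
# ... and grows quadratically with the distance from it.
print(nll_normal(2.0, 0.0, 1.0))
```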

Lower bound¶

ELBO¶

class pixyz.losses.ELBO(p, approximate_dist, input_var=None)[source]

The evidence lower bound (Monte Carlo approximation).

\mathrm{ELBO} = \mathbb{E}_{q(z|x)}\left[\log \frac{p(x,z)}{q(z|x)}\right] \approx \frac{1}{L}\sum_{l=1}^{L}\log \frac{p(x,z_l)}{q(z_l|x)},

where z_l \sim q(z|x).

loss_text
estimate(x={}, batch_size=None)[source]
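
To make the "lower bound" property concrete, here is a plain-Python illustration (not the pixyz API) on a toy model with a discrete latent variable, where everything can be computed exactly. The numbers are illustrative only.

```python
import math

# Tiny discrete latent-variable model: latent z in {0, 1}, one fixed observation x.
p_z = {0: 0.5, 1: 0.5}          # prior p(z)
p_x_given_z = {0: 0.8, 1: 0.1}  # likelihood p(x | z) of the observed x

def elbo(q):
    """ELBO = E_{q(z)}[log p(x, z) - log q(z)], computed exactly (z is discrete)."""
    return sum(q[z] * (math.log(p_z[z] * p_x_given_z[z]) - math.log(q[z]))
               for z in (0, 1) if q[z] > 0)

# Log evidence log p(x), the quantity the ELBO bounds from below.
log_evidence = math.log(sum(p_z[z] * p_x_given_z[z] for z in (0, 1)))

# Any approximate distribution q gives a lower bound ...
elbo_q = elbo({0: 0.7, 1: 0.3})

# ... and the bound is tight when q equals the true posterior p(z | x).
posterior = {z: p_z[z] * p_x_given_z[z] / math.exp(log_evidence) for z in (0, 1)}
elbo_posterior = elbo(posterior)

print(elbo_q, elbo_posterior, log_evidence)
```

Maximizing the ELBO over q therefore pushes q toward the true posterior, which is the basis of variational inference.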

Divergence¶

KullbackLeibler¶

class pixyz.losses.KullbackLeibler(p1, p2, input_var=None)[source]

Kullback-Leibler divergence (analytical):

D_{KL}\left[p_1(x) \| p_2(x)\right] = \mathbb{E}_{p_1(x)}\left[\log \frac{p_1(x)}{p_2(x)}\right]

loss_text
estimate(x, **kwargs)[source]
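
"Analytical" here means the divergence is computed in closed form rather than by sampling. A plain-Python illustration (not the pixyz API) using the well-known closed form for two univariate Gaussians:

```python
import math

def kl_normal(mu1, s1, mu2, s2):
    """Closed-form KL(N(mu1, s1^2) || N(mu2, s2^2)) for univariate Gaussians."""
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

print(kl_normal(0.0, 1.0, 0.0, 1.0))  # 0.0: identical distributions
print(kl_normal(0.0, 1.0, 1.0, 2.0))  # positive, and asymmetric in its arguments
```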

Similarity¶

SimilarityLoss¶

class pixyz.losses.SimilarityLoss(p1, p2, input_var=None, var=['z'], margin=0)[source]

Learning Modality-Invariant Representations for Speech and Images (Leidal et al.)

estimate(x)[source]

MultiModalContrastivenessLoss¶

class pixyz.losses.MultiModalContrastivenessLoss(p1, p2, input_var=None, margin=0.5)[source]

Disentangling by Partitioning: A Representation Learning Framework for Multimodal Sensory Data

estimate(x)[source]

Adversarial loss¶

AdversarialJensenShannon¶

class pixyz.losses.AdversarialJensenShannon(p, q, discriminator, input_var=None, optimizer=<class 'torch.optim.adam.Adam'>, optimizer_params={}, inverse_g_loss=True)[source]

Bases: pixyz.losses.adversarial_loss.AdversarialLoss

D_{JS}\left[p(x) \| q(x)\right] = \frac{1}{2} D_{KL}\left[p(x) \| r(x)\right] + \frac{1}{2} D_{KL}\left[q(x) \| r(x)\right],

where r(x) = \frac{p(x) + q(x)}{2}. Since p and q are implicit distributions, this divergence is estimated adversarially through the discriminator (d_loss trains the discriminator, g_loss trains the generative model).

loss_text
estimate(x={}, discriminator=False)[source]
d_loss(y1, y2, batch_size)[source]
g_loss(y1, y2, batch_size)[source]
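
In this class the densities of p and q are implicit, so the divergence must be estimated through the discriminator. For intuition about what is being estimated, here is the Jensen-Shannon divergence computed directly on two discrete distributions (plain Python, not the adversarial estimator):

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions given as lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    """Jensen-Shannon divergence: 0.5*KL(p||m) + 0.5*KL(q||m), m = (p+q)/2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.9, 0.1]
q = [0.2, 0.8]
print(js(p, q))  # symmetric and bounded above by log 2
print(js(p, p))  # 0.0 for identical distributions
```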

AdversarialKullbackLeibler¶

class pixyz.losses.AdversarialKullbackLeibler(q, p, discriminator, **kwargs)[source]

Bases: pixyz.losses.adversarial_loss.AdversarialLoss

D_{KL}\left[q(x) \| p(x)\right] = \mathbb{E}_{q(x)}\left[\log \frac{q(x)}{p(x)}\right] \approx \mathbb{E}_{q(x)}\left[\log \frac{d^*(x)}{1 - d^*(x)}\right],

where d^* is the optimal discriminator between p and q.

Note that minimizing this divergence drives q toward p.

loss_text
estimate(x={}, discriminator=False)[source]
g_loss(y1, batch_size)[source]
d_loss(y1, y2, batch_size)[source]

AdversarialWassersteinDistance¶

class pixyz.losses.AdversarialWassersteinDistance(p, q, discriminator, clip_value=0.01, **kwargs)[source]

Bases: pixyz.losses.adversarial_loss.AdversarialJensenShannon

loss_text
d_loss(y1, y2, *args, **kwargs)[source]
g_loss(y1, y2, *args, **kwargs)[source]
train(train_x, **kwargs)[source]

Train the implicit (adversarial) loss function.

Loss for special purpose¶

Parameter¶

class pixyz.losses.losses.Parameter(input_var)[source]

Loss that returns the value of a given input variable as-is, so that externally supplied scalars (e.g., coefficients) can take part in loss arithmetic.
estimate(x={}, **kwargs)[source]
loss_text

Operators¶

LossOperator¶

class pixyz.losses.losses.LossOperator(loss1, loss2)[source]
loss_text
estimate(x={}, **kwargs)[source]
train(x, **kwargs)[source]

TODO: Fix

test(x, **kwargs)[source]

TODO: Fix

LossSelfOperator¶

class pixyz.losses.losses.LossSelfOperator(loss1)[source]
train(x={}, **kwargs)[source]

Train the implicit (adversarial) loss function.

test(x={}, **kwargs)[source]

Test the implicit (adversarial) loss function.

AddLoss¶

class pixyz.losses.losses.AddLoss(loss1, loss2)[source]
loss_text
estimate(x={}, **kwargs)[source]

SubLoss¶

class pixyz.losses.losses.SubLoss(loss1, loss2)[source]
loss_text
estimate(x={}, **kwargs)[source]

MulLoss¶

class pixyz.losses.losses.MulLoss(loss1, loss2)[source]
loss_text
estimate(x={}, **kwargs)[source]

DivLoss¶

class pixyz.losses.losses.DivLoss(loss1, loss2)[source]
loss_text
estimate(x={}, **kwargs)[source]

NegLoss¶

class pixyz.losses.losses.NegLoss(loss1)[source]
loss_text
estimate(x={}, **kwargs)[source]

BatchMean¶

class pixyz.losses.losses.BatchMean(loss1)[source]

Loss averaged over batch data.

\mathbb{E}_{p_{data}(x)}\left[\mathcal{L}(x)\right] \approx \frac{1}{N}\sum_{i=1}^{N}\mathcal{L}(x^{(i)}),

where x^{(i)} \sim p_{data}(x) and \mathcal{L} is a loss function evaluated per data point.

loss_text
estimate(x={}, **kwargs)[source]

BatchSum¶

class pixyz.losses.losses.BatchSum(loss1)[source]

Loss summed over batch data.

\sum_{i=1}^{N}\mathcal{L}(x^{(i)}),

where x^{(i)} \sim p_{data}(x) and \mathcal{L} is a loss function evaluated per data point.

loss_text
estimate(x={}, **kwargs)[source]