pixyz.losses (Loss API)

Loss

class pixyz.losses.losses.Loss(p1, p2=None, input_var=None)[source]

Bases: object

input_var
loss_text
mean()[source]
sum()[source]
estimate(x={}, **kwargs)[source]
train(x={}, **kwargs)[source]

Train the implicit (adversarial) loss function.

test(x={}, **kwargs)[source]

Test the implicit (adversarial) loss function.
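
All concrete losses below share this interface: construct the loss from distributions, inspect loss_text, reduce over the batch with mean() or sum(), and evaluate with estimate(). A minimal sketch of the workflow; the fixed-parameter Normal constructor, its dim keyword, and the toy data are assumptions based on pixyz examples contemporary with this API:

    import torch
    from pixyz.distributions import Normal
    from pixyz.losses import NLL

    # p(x): a fixed 10-dimensional standard normal (constructor form assumed).
    p = Normal(loc=torch.tensor(0.), scale=torch.tensor(1.),
               var=["x"], dim=10, name="p")

    loss = NLL(p).mean()                 # mean() wraps the loss in BatchMean
    print(loss.loss_text)                # symbolic form of the loss
    value = loss.estimate({"x": torch.randn(64, 10)})  # scalar torch.Tensor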

Negative expected value of log-likelihood (entropy)

CrossEntropy

class pixyz.losses.CrossEntropy(p1, p2, input_var=None)[source]

Bases: pixyz.losses.losses.Loss

Cross entropy, i.e., the negative expected value of the log-likelihood (Monte Carlo approximation).

-\mathbb{E}_{q(x)}[\log p(x)] \approx -\frac{1}{L}\sum_{l=1}^L \log p(x_l),

where x_l \sim q(x).

loss_text
estimate(x={})[source]
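
A typical use is the cross entropy between an amortized encoder q(z|x) and a fixed prior p(z): z is sampled from the encoder and scored under the prior. A hedged sketch; the network architecture, sizes, and constructor details are illustrative assumptions:

    import torch
    from torch import nn
    from torch.nn import functional as F
    from pixyz.distributions import Normal
    from pixyz.losses import CrossEntropy

    x_dim, z_dim = 784, 16

    class Q(Normal):  # encoder q(z|x)
        def __init__(self):
            super().__init__(cond_var=["x"], var=["z"], name="q")
            self.fc = nn.Linear(x_dim, 128)
            self.fc_loc = nn.Linear(128, z_dim)
            self.fc_scale = nn.Linear(128, z_dim)

        def forward(self, x):
            h = F.relu(self.fc(x))
            return {"loc": self.fc_loc(h), "scale": F.softplus(self.fc_scale(h))}

    prior = Normal(loc=torch.tensor(0.), scale=torch.tensor(1.),
                   var=["z"], dim=z_dim, name="p")

    # -E_{q(z|x)}[log p(z)]: z is sampled from the encoder internally.
    ce = CrossEntropy(Q(), prior).mean()
    value = ce.estimate({"x": torch.rand(64, x_dim)})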

Entropy

class pixyz.losses.Entropy(p1, input_var=None)[source]

Bases: pixyz.losses.losses.Loss

Entropy (Monte Carlo approximation).

-\mathbb{E}_{p(x)}[\log p(x)] \approx -\frac{1}{L}\sum_{l=1}^L \log p(x_l),

where x_l \sim p(x).

Note:
This class is a special case of the CrossEntropy class: Entropy(p1) gives the same result as CrossEntropy(p1, p1).
loss_text
estimate(x={})[source]
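
Following the note above, the two constructions below denote the same quantity; a brief sketch (fixed-parameter constructor assumed as before):

    import torch
    from pixyz.distributions import Normal
    from pixyz.losses import Entropy, CrossEntropy

    p = Normal(loc=torch.tensor(0.), scale=torch.tensor(1.),
               var=["x"], dim=10, name="p")

    # Entropy(p) estimates the same quantity as CrossEntropy(p, p).
    print(Entropy(p).loss_text)
    print(CrossEntropy(p, p).loss_text)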

StochasticReconstructionLoss

class pixyz.losses.StochasticReconstructionLoss(encoder, decoder, input_var=None)[source]

Bases: pixyz.losses.losses.Loss

Reconstruction Loss (Monte Carlo approximation).

-\mathbb{E}_{q(z|x)}[\log p(x|z)] \approx -\frac{1}{L}\sum_{l=1}^L \log p(x|z_l),

where z_l \sim q(z|x).

Note:
This class is a special case of the CrossEntropy class: StochasticReconstructionLoss(encoder, decoder) gives the same result as CrossEntropy(encoder, decoder).
loss_text
estimate(x={})[source]
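
A typical use is the reconstruction term of a VAE, with an amortized Gaussian encoder and a Bernoulli decoder. A minimal sketch; architectures and sizes are illustrative assumptions:

    import torch
    from torch import nn
    from torch.nn import functional as F
    from pixyz.distributions import Normal, Bernoulli
    from pixyz.losses import StochasticReconstructionLoss

    x_dim, z_dim = 784, 16

    class Q(Normal):  # encoder q(z|x)
        def __init__(self):
            super().__init__(cond_var=["x"], var=["z"], name="q")
            self.fc = nn.Linear(x_dim, 128)
            self.fc_loc = nn.Linear(128, z_dim)
            self.fc_scale = nn.Linear(128, z_dim)

        def forward(self, x):
            h = F.relu(self.fc(x))
            return {"loc": self.fc_loc(h), "scale": F.softplus(self.fc_scale(h))}

    class P(Bernoulli):  # decoder p(x|z)
        def __init__(self):
            super().__init__(cond_var=["z"], var=["x"], name="p")
            self.fc = nn.Linear(z_dim, x_dim)

        def forward(self, z):
            return {"probs": torch.sigmoid(self.fc(z))}

    # -E_{q(z|x)}[log p(x|z)]: z is sampled from the encoder internally.
    reconst = StochasticReconstructionLoss(Q(), P()).mean()
    value = reconst.estimate({"x": torch.rand(64, x_dim)})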

Negative log-likelihood

NLL

class pixyz.losses.NLL(p, input_var=None)[source]

Bases: pixyz.losses.losses.Loss

Negative log-likelihood.

-\log p(x)

loss_text
estimate(x={})[source]
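
Without a batch reduction, estimate is expected to return one value per batch element; wrapping with mean() reduces to a scalar. A short sketch (the per-example output shape is an assumption):

    import torch
    from pixyz.distributions import Normal
    from pixyz.losses import NLL

    p = Normal(loc=torch.tensor(0.), scale=torch.tensor(1.),
               var=["x"], dim=10, name="p")
    x = {"x": torch.randn(64, 10)}

    nll = NLL(p)                        # per-example losses, no batch reduction yet
    per_example = nll.estimate(x)       # one value per batch element (assumed shape (64,))
    averaged = nll.mean().estimate(x)   # scalar, averaged over the batch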

Lower bound

ELBO

class pixyz.losses.ELBO(p, approximate_dist, input_var=None)[source]

Bases: pixyz.losses.losses.Loss

The evidence lower bound (Monte Carlo approximation).

\mathbb{E}_{q(z|x)}\left[\log \frac{p(x,z)}{q(z|x)}\right] \approx \frac{1}{L}\sum_{l=1}^L \log \frac{p(x, z_l)}{q(z_l|x)},

where z_l \sim q(z|x).

loss_text
estimate(x={}, batch_size=None)[source]
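
The usual minimization target is the negative ELBO. The sketch below builds the joint generative model p(x, z) as a distribution product, which pixyz supports via the * operator; the architectures, sizes, and constructor details are illustrative assumptions:

    import torch
    from torch import nn
    from torch.nn import functional as F
    from pixyz.distributions import Normal, Bernoulli
    from pixyz.losses import ELBO

    x_dim, z_dim = 784, 16

    class Q(Normal):  # encoder q(z|x)
        def __init__(self):
            super().__init__(cond_var=["x"], var=["z"], name="q")
            self.fc = nn.Linear(x_dim, 128)
            self.fc_loc = nn.Linear(128, z_dim)
            self.fc_scale = nn.Linear(128, z_dim)

        def forward(self, x):
            h = F.relu(self.fc(x))
            return {"loc": self.fc_loc(h), "scale": F.softplus(self.fc_scale(h))}

    class P(Bernoulli):  # decoder p(x|z)
        def __init__(self):
            super().__init__(cond_var=["z"], var=["x"], name="p")
            self.fc = nn.Linear(z_dim, x_dim)

        def forward(self, z):
            return {"probs": torch.sigmoid(self.fc(z))}

    q, p = Q(), P()
    prior = Normal(loc=torch.tensor(0.), scale=torch.tensor(1.),
                   var=["z"], dim=z_dim, name="p_prior")

    p_joint = p * prior                # joint model p(x, z) = p(x|z) p(z)
    loss = -ELBO(p_joint, q).mean()    # negative ELBO as a minimization target

    optimizer = torch.optim.Adam(list(q.parameters()) + list(p.parameters()))
    x = torch.rand(64, x_dim)

    optimizer.zero_grad()
    value = loss.estimate({"x": x})    # scalar tensor, still in the autograd graph
    value.backward()
    optimizer.step()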

Similarity

SimilarityLoss

class pixyz.losses.SimilarityLoss(p1, p2, input_var=None, var=['z'], margin=0)[source]

Bases: pixyz.losses.losses.Loss

Learning Modality-Invariant Representations for Speech and Images (Leidal et al.)

estimate(x)[source]

MultiModalContrastivenessLoss

class pixyz.losses.MultiModalContrastivenessLoss(p1, p2, input_var=None, margin=0.5)[source]

Bases: pixyz.losses.losses.Loss

Disentangling by Partitioning: A Representation Learning Framework for Multimodal Sensory Data

estimate(x)[source]

Adversarial loss (GAN loss)

AdversarialJSDivergence

class pixyz.losses.AdversarialJSDivergence(p_data, p, discriminator, input_var=None, optimizer=<class 'torch.optim.adam.Adam'>, optimizer_params={}, inverse_g_loss=True)[source]

Bases: pixyz.losses.losses.Loss

Adversarial loss (Jensen-Shannon divergence).

\mathcal{L}_{adv} = 2 \cdot \mathrm{JS}[p_{data}(x)||p(x)] + \mathrm{const.}

loss_text
estimate(x={}, discriminator=False)[source]
d_loss(y1, y2, batch_size)[source]
g_loss(y1, y2, batch_size)[source]
train(train_x, **kwargs)[source]

Train the implicit (adversarial) loss function.

test(test_x, **kwargs)[source]

Test the implicit (adversarial) loss function.
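
A GAN-style sketch. The empirical-distribution wrapper is written here as DataDistribution(["x"]) and the latent is marginalized with marginalize_var, both following pixyz examples from around this release; treat those names, and the architectures, as assumptions:

    import torch
    from torch import nn
    from pixyz.distributions import Deterministic, Normal, DataDistribution
    from pixyz.losses import AdversarialJSDivergence

    x_dim, z_dim = 784, 32

    class Generator(Deterministic):  # x = g(z), an implicit model of p(x|z)
        def __init__(self):
            super().__init__(cond_var=["z"], var=["x"], name="p")
            self.fc = nn.Linear(z_dim, x_dim)

        def forward(self, z):
            return {"x": torch.sigmoid(self.fc(z))}

    class Discriminator(Deterministic):  # d(t|x): probability that x is real
        def __init__(self):
            super().__init__(cond_var=["x"], var=["t"], name="d")
            self.fc = nn.Linear(x_dim, 1)

        def forward(self, x):
            return {"t": torch.sigmoid(self.fc(x))}

    g = Generator()
    prior = Normal(loc=torch.tensor(0.), scale=torch.tensor(1.),
                   var=["z"], dim=z_dim, name="p_prior")
    p = (g * prior).marginalize_var("z")   # implicit marginal p(x)
    p_data = DataDistribution(["x"])       # empirical p_data(x) (assumed name)

    adv_loss = AdversarialJSDivergence(p_data, p, Discriminator())
    gen_optimizer = torch.optim.Adam(g.parameters())

    x = torch.rand(64, x_dim)
    d_value = adv_loss.train({"x": x})     # one discriminator step (internal optimizer)

    gen_optimizer.zero_grad()
    g_value = adv_loss.estimate({"x": x}).mean()  # .mean() is a no-op if already scalar
    g_value.backward()
    gen_optimizer.step()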

AdversarialWassersteinDistance

class pixyz.losses.AdversarialWassersteinDistance(p_data, p, discriminator, clip_value=0.01, **kwargs)[source]

Bases: pixyz.losses.adversarial_loss.AdversarialJSDivergence

Adversarial loss (Wasserstein Distance).

loss_text
d_loss(y1, y2, *args, **kwargs)[source]
g_loss(y1, y2, *args, **kwargs)[source]
train(train_x, **kwargs)[source]

Train the implicit (adversarial) loss function.
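
Continuing the GAN sketch above, the Wasserstein variant swaps in critic-style d/g losses and clamps the critic's weights to [-clip_value, clip_value] after each train() step (standard WGAN weight clipping is assumed from the class signature); the construction is otherwise the same:

    from pixyz.losses import AdversarialWassersteinDistance

    # p_data, p, Discriminator, and x as in the previous sketch.
    w_loss = AdversarialWassersteinDistance(p_data, p, Discriminator(),
                                            clip_value=0.01)
    d_value = w_loss.train({"x": x})       # critic step, then weight clipping
    g_value = w_loss.estimate({"x": x}).mean()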

Loss for special purpose

Parameter

class pixyz.losses.losses.Parameter(input_var)[source]

Bases: pixyz.losses.losses.Loss

estimate(x={}, **kwargs)[source]
loss_text
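
Parameter is a placeholder whose value is read from the input dictionary at estimate time, which is useful for annealed loss coefficients. A sketch (passing the placeholder name as a string is an assumption):

    import torch
    from pixyz.distributions import Normal
    from pixyz.losses import NLL
    from pixyz.losses.losses import Parameter

    p = Normal(loc=torch.tensor(0.), scale=torch.tensor(1.),
               var=["x"], dim=10, name="p")

    beta = Parameter("beta")           # value supplied at estimate time
    loss = (beta * NLL(p)).mean()      # e.g. an annealed loss coefficient
    value = loss.estimate({"x": torch.randn(64, 10), "beta": 0.5})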

Operators

LossOperator

class pixyz.losses.losses.LossOperator(loss1, loss2)[source]

Bases: pixyz.losses.losses.Loss

loss_text
estimate(x={}, **kwargs)[source]
train(x, **kwargs)[source]

TODO: Fix

test(x, **kwargs)[source]

TODO: Fix

LossSelfOperator

class pixyz.losses.losses.LossSelfOperator(loss1)[source]

Bases: pixyz.losses.losses.Loss

train(x={}, **kwargs)[source]

Train the implicit (adversarial) loss function.

test(x={}, **kwargs)[source]

Test the implicit (adversarial) loss function.

AddLoss

class pixyz.losses.losses.AddLoss(loss1, loss2)[source]

Bases: pixyz.losses.losses.LossOperator

loss_text
estimate(x={}, **kwargs)[source]

SubLoss

class pixyz.losses.losses.SubLoss(loss1, loss2)[source]

Bases: pixyz.losses.losses.LossOperator

loss_text
estimate(x={}, **kwargs)[source]

MulLoss

class pixyz.losses.losses.MulLoss(loss1, loss2)[source]

Bases: pixyz.losses.losses.LossOperator

loss_text
estimate(x={}, **kwargs)[source]

DivLoss

class pixyz.losses.losses.DivLoss(loss1, loss2)[source]

Bases: pixyz.losses.losses.LossOperator

loss_text
estimate(x={}, **kwargs)[source]

NegLoss

class pixyz.losses.losses.NegLoss(loss1)[source]

Bases: pixyz.losses.losses.LossSelfOperator

loss_text
estimate(x={}, **kwargs)[source]
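
These operator classes are rarely constructed directly; they are produced by Python's arithmetic operators on Loss objects. A sketch of the overloading (fixed-parameter constructors assumed as above):

    import torch
    from pixyz.distributions import Normal
    from pixyz.losses import NLL

    p = Normal(loc=torch.tensor(0.), scale=torch.tensor(1.),
               var=["x"], dim=10, name="p")
    q = Normal(loc=torch.tensor(0.), scale=torch.tensor(2.),
               var=["x"], dim=10, name="q")

    diff = NLL(p) - NLL(q)      # SubLoss; +, * and / build Add/Mul/DivLoss likewise
    neg = -diff                 # NegLoss
    print(neg.loss_text)        # symbolic form of the composed loss
    value = neg.mean().estimate({"x": torch.randn(64, 10)})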

BatchMean

class pixyz.losses.losses.BatchMean(loss1)[source]

Bases: pixyz.losses.losses.LossSelfOperator

Loss averaged over batch data.

\mathbb{E}_{p_{data}(x)}[\mathcal{L}(x)] \approx \frac{1}{N}\sum_{i=1}^N \mathcal{L}(x_i),

where x_i \sim p_{data}(x) and \mathcal{L} is a loss function.

loss_text
estimate(x={}, **kwargs)[source]

BatchSum

class pixyz.losses.losses.BatchSum(loss1)[source]

Bases: pixyz.losses.losses.LossSelfOperator

Loss summed over batch data.

\sum_{i=1}^N \mathcal{L}(x_i),

where x_i \sim p_{data}(x) and \mathcal{L} is a loss function.

loss_text
estimate(x={}, **kwargs)[source]
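
Both wrappers are normally created through Loss.mean() and Loss.sum(). A sketch checking the correspondence between the two reductions (it assumes estimate of the wrapped losses returns scalar tensors):

    import torch
    from pixyz.distributions import Normal
    from pixyz.losses import NLL

    p = Normal(loc=torch.tensor(0.), scale=torch.tensor(1.),
               var=["x"], dim=10, name="p")
    x = {"x": torch.randn(64, 10)}

    nll = NLL(p)
    batch_mean = nll.mean()     # BatchMean(nll)
    batch_sum = nll.sum()       # BatchSum(nll)

    # Mean and sum over the batch differ exactly by the batch size.
    assert torch.isclose(batch_sum.estimate(x) / 64, batch_mean.estimate(x))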