pixyz.losses (Loss API)

Loss

class pixyz.losses.losses.Loss(p1, p2=None, input_var=None)[source]

Bases: object

input_var
loss_text
abs()[source]
mean()[source]
sum()[source]
estimate(x={}, **kwargs)[source]
train(x={}, **kwargs)[source]

Train the implicit (adversarial) loss function.

test(x={}, **kwargs)[source]

Test the implicit (adversarial) loss function.

Negative expected value of log-likelihood (entropy)

CrossEntropy

class pixyz.losses.CrossEntropy(p1, p2, input_var=None)[source]

Bases: pixyz.losses.losses.Loss

Cross entropy, a.k.a. the negative expected value of the log-likelihood (Monte Carlo approximation).

-\mathbb{E}_{q(x)}[\log p(x)] \approx -\frac{1}{L}\sum_{l=1}^L \log p(x_l),

where x_l \sim q(x).

loss_text
estimate(x={})[source]
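
Example (a minimal sketch, not taken from the pixyz documentation; the layer sizes, the variable names "x" and "z", and the MLP parameterization are illustrative assumptions):

    import torch
    from torch import nn
    from torch.nn import functional as F
    from pixyz.distributions import Normal
    from pixyz.losses import CrossEntropy

    x_dim, z_dim = 784, 64

    class CondNormal(Normal):
        """Gaussian over z conditioned on x, parameterized by a small MLP."""
        def __init__(self, name):
            super().__init__(cond_var=["x"], var=["z"], name=name)
            self.fc = nn.Linear(x_dim, 512)
            self.fc_loc = nn.Linear(512, z_dim)
            self.fc_scale = nn.Linear(512, z_dim)

        def forward(self, x):
            h = F.relu(self.fc(x))
            return {"loc": self.fc_loc(h), "scale": F.softplus(self.fc_scale(h))}

    q = CondNormal("q")  # q(z|x): the distribution the expectation is taken over
    p = CondNormal("p")  # p(z|x): the distribution whose log-likelihood is evaluated

    loss = CrossEntropy(q, p)        # -E_{q(z|x)}[log p(z|x)]
    print(loss.loss_text)            # human-readable form of the loss
    x = torch.randn(32, x_dim)
    value = loss.estimate({"x": x})  # Monte Carlo estimate on a batch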

Entropy

class pixyz.losses.Entropy(p1, input_var=None)[source]

Bases: pixyz.losses.losses.Loss

Entropy (Monte Carlo approximation).

-\mathbb{E}_{p(x)}[\log p(x)] \approx -\frac{1}{L}\sum_{l=1}^L \log p(x_l),

where x_l \sim p(x).

Note:
This class is a special case of the CrossEntropy class; the same result can be obtained with CrossEntropy(p1, p1).
loss_text
estimate(x={})[source]

StochasticReconstructionLoss

class pixyz.losses.StochasticReconstructionLoss(encoder, decoder, input_var=None)[source]

Bases: pixyz.losses.losses.Loss

Reconstruction Loss (Monte Carlo approximation).

-\mathbb{E}_{q(z|x)}[\log p(x|z)] \approx -\frac{1}{L}\sum_{l=1}^L \log p(x|z_l),

where z_l \sim q(z|x).

Note:
This class is a special case of the CrossEntropy class; the same result can be obtained with CrossEntropy(encoder, decoder).
loss_text
estimate(x={})[source]
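
Example (a minimal VAE-style sketch; the encoder/decoder architectures and variable names are illustrative assumptions, not from the pixyz documentation):

    import torch
    from torch import nn
    from torch.nn import functional as F
    from pixyz.distributions import Normal, Bernoulli
    from pixyz.losses import StochasticReconstructionLoss

    x_dim, z_dim = 784, 64

    class Encoder(Normal):          # q(z|x)
        def __init__(self):
            super().__init__(cond_var=["x"], var=["z"], name="q")
            self.fc = nn.Linear(x_dim, 512)
            self.fc_loc = nn.Linear(512, z_dim)
            self.fc_scale = nn.Linear(512, z_dim)

        def forward(self, x):
            h = F.relu(self.fc(x))
            return {"loc": self.fc_loc(h), "scale": F.softplus(self.fc_scale(h))}

    class Decoder(Bernoulli):       # p(x|z)
        def __init__(self):
            super().__init__(cond_var=["z"], var=["x"], name="p")
            self.fc = nn.Linear(z_dim, 512)
            self.fc_out = nn.Linear(512, x_dim)

        def forward(self, z):
            h = F.relu(self.fc(z))
            return {"probs": torch.sigmoid(self.fc_out(h))}

    q, p = Encoder(), Decoder()
    reconst = StochasticReconstructionLoss(q, p)  # -E_{q(z|x)}[log p(x|z)]
    x = torch.bernoulli(torch.rand(32, x_dim))    # dummy binary batch
    value = reconst.estimate({"x": x})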

Negative log-likelihood

NLL

class pixyz.losses.NLL(p, input_var=None)[source]

Bases: pixyz.losses.losses.Loss

Negative log-likelihood.

-\log p(x)

loss_text
estimate(x={})[source]
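
Example (a short sketch reusing p, x and z_dim from the StochasticReconstructionLoss sketch above; since p is conditional, every variable it depends on must be supplied to estimate):

    from pixyz.losses import NLL

    nll = NLL(p)                             # -log p(x|z)
    print(nll.loss_text)
    z = torch.randn(32, z_dim)
    value = nll.estimate({"x": x, "z": z})   # both x and z are required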

Lower bound

ELBO

class pixyz.losses.ELBO(p, approximate_dist, input_var=None)[source]

Bases: pixyz.losses.losses.Loss

The evidence lower bound (Monte Carlo approximation).

\mathbb{E}_{q(z|x)}\left[\log \frac{p(x,z)}{q(z|x)}\right] \approx \frac{1}{L}\sum_{l=1}^L \log \frac{p(x, z_l)}{q(z_l|x)},

where z_l \sim q(z|x).

loss_text
estimate(x={}, batch_size=None)[source]
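
Example (a sketch reusing q, p and x from the StochasticReconstructionLoss sketch above; the prior construction is an assumption, and the keyword for the event size may be features_shape rather than dim depending on the pixyz version):

    import torch
    from pixyz.distributions import Normal
    from pixyz.losses import ELBO

    # standard normal prior p(z); the dim keyword is an assumption here
    prior = Normal(loc=torch.tensor(0.), scale=torch.tensor(1.),
                   var=["z"], dim=z_dim, name="p_prior")

    p_joint = p * prior            # p(x,z) = p(x|z)p(z)
    elbo = ELBO(p_joint, q)        # E_{q(z|x)}[log p(x,z) - log q(z|x)]
    vae_loss = -elbo.mean()        # negative ELBO, averaged over the batch
    value = vae_loss.estimate({"x": x})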

Divergence

KullbackLeibler

class pixyz.losses.KullbackLeibler(p1, p2, input_var=None)[source]

Bases: pixyz.losses.losses.Loss

Kullback-Leibler divergence (analytical).

D_{KL}[p||q] = \mathbb{E}_{p(x)}[\log \frac{p(x)}{q(x)}]

loss_text
estimate(x, **kwargs)[source]
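
Example (a sketch reusing q, p, prior and x from the sketches above; the analytical KL term plays the role of the regularizer in the usual VAE objective):

    from pixyz.losses import KullbackLeibler, StochasticReconstructionLoss

    kl = KullbackLeibler(q, prior)                # D_KL[q(z|x) || p(z)], analytical
    reconst = StochasticReconstructionLoss(q, p)  # -E_{q(z|x)}[log p(x|z)]
    vae_loss = (reconst + kl).mean()              # negative ELBO written as two terms
    value = vae_loss.estimate({"x": x})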

Similarity

SimilarityLoss

class pixyz.losses.SimilarityLoss(p1, p2, input_var=None, var=['z'], margin=0)[source]

Bases: pixyz.losses.losses.Loss

Learning Modality-Invariant Representations for Speech and Images (Leidal et al.)

estimate(x)[source]

MultiModalContrastivenessLoss

class pixyz.losses.MultiModalContrastivenessLoss(p1, p2, input_var=None, margin=0.5)[source]

Bases: pixyz.losses.losses.Loss

Disentangling by Partitioning: A Representation Learning Framework for Multimodal Sensory Data

estimate(x)[source]

Adversarial loss (GAN loss)

AdversarialJensenShannon

class pixyz.losses.AdversarialJensenShannon(p, q, discriminator, input_var=None, optimizer=<class 'torch.optim.adam.Adam'>, optimizer_params={}, inverse_g_loss=True)[source]

Bases: pixyz.losses.adversarial_loss.AdversarialLoss

Jensen-Shannon divergence (adversarial training).

2 \cdot D_{JS}[p(x)||q(x)] - 2 \log 2 = \mathbb{E}_{p(x)}[\log d^*(x)] + \mathbb{E}_{q(x)}[\log (1-d^*(x))],

where d^*(x) = \arg\max_{d} \mathbb{E}_{p(x)}[\log d(x)] + \mathbb{E}_{q(x)}[\log (1-d(x))].

loss_text
estimate(x={}, discriminator=False)[source]
d_loss(y1, y2, batch_size)[source]
g_loss(y1, y2, batch_size)[source]
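
Example (a call-pattern sketch only, under loudly stated assumptions: p, q and d are taken to be already-defined pixyz distributions over the same variable "x", for instance a generator marginal and the data distribution, plus a discriminator; their construction and the exact contents of the input dictionary depend on how they are defined and are not shown here; the same train/estimate pattern applies to AdversarialKullbackLeibler and AdversarialWassersteinDistance):

    import torch
    from torch import optim
    from pixyz.losses import AdversarialJensenShannon

    gan_loss = AdversarialJensenShannon(p, q, discriminator=d,
                                        optimizer=optim.Adam,
                                        optimizer_params={"lr": 1e-4})
    gen_optimizer = optim.Adam(p.parameters(), lr=1e-4)

    for x_batch in dataloader:                 # dataloader is assumed
        # update the discriminator with the loss's internal optimizer
        d_value = gan_loss.train({"x": x_batch})
        # update the model/generator parameters on the adversarial estimate
        g_value = gan_loss.estimate({"x": x_batch}).mean()
        gen_optimizer.zero_grad()
        g_value.backward()
        gen_optimizer.step()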

AdversarialKullbackLeibler

class pixyz.losses.AdversarialKullbackLeibler(q, p, discriminator, **kwargs)[source]

Bases: pixyz.losses.adversarial_loss.AdversarialLoss

Kullback-Leibler divergence (adversarial training).

D_{KL}[q(x)||p(x)] = \mathbb{E}_{q(x)}[\log \frac{q(x)}{p(x)}]
 = \mathbb{E}_{q(x)}[\log \frac{d^*(x)}{1-d^*(x)}],

where d^*(x) = \arg\max_{d} \mathbb{E}_{p(x)}[\log d(x)] + \mathbb{E}_{q(x)}[\log (1-d(x))].

Note that minimizing this divergence brings q closer to p.

loss_text
estimate(x={}, discriminator=False)[source]
g_loss(y1, batch_size)[source]
d_loss(y1, y2, batch_size)[source]

AdversarialWassersteinDistance

class pixyz.losses.AdversarialWassersteinDistance(p, q, discriminator, clip_value=0.01, **kwargs)[source]

Bases: pixyz.losses.adversarial_loss.AdversarialJensenShannon

Wasserstein distance (adversarial training).

W(p, q) = \sup_{||d||_{L} \leq 1} \mathbb{E}_{p(x)}[d(x)] - \mathbb{E}_{q(x)}[d(x)]

loss_text
d_loss(y1, y2, *args, **kwargs)[source]
g_loss(y1, y2, *args, **kwargs)[source]
train(train_x, **kwargs)[source]

Train the implicit (adversarial) loss function.

Auto-regressive loss

ARLoss

class pixyz.losses.ARLoss(step_loss, last_loss=None, step_fn=<function ARLoss.<lambda>>, max_iter=1, return_params=False, input_var=None, series_var=None, update_value=None)[source]

Bases: pixyz.losses.losses.Loss

Auto-regressive loss.

This loss performs a "scan"-like operation, so arbitrary auto-regressive models can be implemented with this class.

\mathcal{L} = \mathcal{L}_{last}(x_1, h_T) + \sum_{t=1}^{T}\mathcal{L}_{step}(x_t, h_t),

where h_t = f_{step}(x_{t-1}, h_{t-1}).

loss_text
slice_step_from_inputs(t, x)[source]
estimate(x={})[source]

Losses for special purposes

Parameter

class pixyz.losses.losses.Parameter(input_var)[source]

Bases: pixyz.losses.losses.Loss

estimate(x={}, **kwargs)[source]
loss_text
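
Example (a sketch reusing reconst, kl and x from the sketches above): Parameter turns an externally supplied value into a Loss node, which is convenient for annealed coefficients such as a KL weight; the name "beta" is an illustrative assumption.

    from pixyz.losses.losses import Parameter

    beta = Parameter("beta")                    # placeholder filled in at estimation time
    annealed = (reconst + beta * kl).mean()     # beta-weighted objective
    value = annealed.estimate({"x": x, "beta": 0.1})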

Operators

LossOperator

class pixyz.losses.losses.LossOperator(loss1, loss2)[source]

Bases: pixyz.losses.losses.Loss

loss_text
estimate(x={}, **kwargs)[source]
train(x, **kwargs)[source]

TODO: Fix

test(x, **kwargs)[source]

TODO: Fix

LossSelfOperator

class pixyz.losses.losses.LossSelfOperator(loss1)[source]

Bases: pixyz.losses.losses.Loss

train(x={}, **kwargs)[source]

Train the implicit (adversarial) loss function.

test(x={}, **kwargs)[source]

Test the implicit (adversarial) loss function.

AddLoss

class pixyz.losses.losses.AddLoss(loss1, loss2)[source]

Bases: pixyz.losses.losses.LossOperator

loss_text
estimate(x={}, **kwargs)[source]

SubLoss

class pixyz.losses.losses.SubLoss(loss1, loss2)[source]

Bases: pixyz.losses.losses.LossOperator

loss_text
estimate(x={}, **kwargs)[source]

MulLoss

class pixyz.losses.losses.MulLoss(loss1, loss2)[source]

Bases: pixyz.losses.losses.LossOperator

loss_text
estimate(x={}, **kwargs)[source]

DivLoss

class pixyz.losses.losses.DivLoss(loss1, loss2)[source]

Bases: pixyz.losses.losses.LossOperator

loss_text
estimate(x={}, **kwargs)[source]

NegLoss

class pixyz.losses.losses.NegLoss(loss1)[source]

Bases: pixyz.losses.losses.LossSelfOperator

loss_text
estimate(x={}, **kwargs)[source]

AbsLoss

class pixyz.losses.losses.AbsLoss(loss1)[source]

Bases: pixyz.losses.losses.LossSelfOperator

loss_text
estimate(x={}, **kwargs)[source]

BatchMean

class pixyz.losses.losses.BatchMean(loss1)[source]

Bases: pixyz.losses.losses.LossSelfOperator

Loss averaged over batch data.

\mathbb{E}_{p_{data}(x)}[\mathcal{L}(x)] \approx \frac{1}{N}\sum_{i=1}^N \mathcal{L}(x_i),

where x_i \sim p_{data}(x) and \mathcal{L} is a loss function.

loss_text
estimate(x={}, **kwargs)[source]

BatchSum

class pixyz.losses.losses.BatchSum(loss1)[source]

Bases: pixyz.losses.losses.LossSelfOperator

Loss summed over batch data.

\sum_{i=1}^N \mathcal{L}(x_i),

where x_i \sim p_{data}(x) and \mathcal{L} is a loss function.

loss_text
estimate(x={}, **kwargs)[source]
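
Example (a sketch reusing reconst, kl and x from the sketches above): these operator classes are not usually constructed directly; arithmetic on Loss objects builds them, and mean()/sum() wrap a loss in BatchMean/BatchSum.

    total = reconst + kl        # AddLoss
    diff = reconst - kl         # SubLoss
    negated = -total            # NegLoss
    averaged = total.mean()     # BatchMean
    summed = total.sum()        # BatchSum

    print(averaged.loss_text)
    value = averaged.estimate({"x": x})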