pixyz.flows (Flow layers)

Flow

class pixyz.flows.Flow(in_features)[source]

Bases: torch.nn.modules.module.Module

Flow class. In Pixyz, all flows must inherit from this class.

__init__(in_features)[source]

Parameters: in_features (int) – Size of input data.

in_features

Size of input data.
forward(x, y=None, compute_jacobian=True)[source]

Forward propagation of flow layers.

Parameters:
  • x (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
  • compute_jacobian (bool, defaults to True) – Whether to calculate and store the log-determinant Jacobian. If True, the computed value is stored in logdet_jacobian.
Returns: z
Return type: torch.Tensor

inverse(z, y=None)[source]

Backward (inverse) propagation of flow layers. This method does not compute the log-determinant Jacobian.

Parameters:
  • z (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
Returns: x
Return type: torch.Tensor

logdet_jacobian

Get the log-determinant Jacobian.

Before accessing this property, run the forward or update_jacobian method to calculate and store the log-determinant Jacobian.
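
For instance, with a concrete subclass such as PlanarFlow (documented below), a minimal sketch; the feature size is illustrative:

>>> import torch
>>> f = PlanarFlow(4)
>>> x = torch.randn(1, 4)
>>> z = f(x, compute_jacobian=True)
>>> ldj = f.logdet_jacobian  # one log-determinant value per example in the batch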

class pixyz.flows.FlowList(flow_list)[source]

Bases: pixyz.flows.flows.Flow

__init__(flow_list)[source]

Hold flow modules in a list.

Once initialized, it can be handled as a single flow module.

Notes

Indexing is not supported for now.

Parameters: flow_list (list) – List of flow modules.
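
Examples

A minimal usage sketch (assuming PlanarFlow instances, whose outputs keep the input shape; logdet_jacobian accumulates over the listed flows):

>>> import torch
>>> f = FlowList([PlanarFlow(4), PlanarFlow(4)])
>>> x = torch.randn(1, 4)
>>> z = f(x)  # flows are applied in list order
>>> z.shape
torch.Size([1, 4])
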
forward(x, y=None, compute_jacobian=True)[source]

Forward propagation of flow layers.

Parameters:
  • x (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
  • compute_jacobian (bool, defaults to True) – Whether to calculate and store the log-determinant Jacobian. If True, the computed value is stored in logdet_jacobian.
Returns: z
Return type: torch.Tensor

inverse(z, y=None)[source]

Backward (inverse) propagation of flow layers. This method does not compute the log-determinant Jacobian.

Parameters:
  • z (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
Returns: x
Return type: torch.Tensor

Normalizing flow

PlanarFlow

class pixyz.flows.PlanarFlow(in_features, constraint_u=False)[source]

Bases: pixyz.flows.flows.Flow

Planar flow.

f(\mathbf{x}) = \mathbf{x} + \mathbf{u} h(\mathbf{w}^T \mathbf{x} + b)
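
Examples

A minimal forward sketch (planar flows generally have no closed-form inverse, so only the forward direction is shown):

>>> import torch
>>> f = PlanarFlow(4)
>>> x = torch.randn(1, 4)
>>> z = f(x)
>>> z.shape
torch.Size([1, 4])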

deriv_tanh(x)[source]
reset_parameters()[source]
forward(x, y=None, compute_jacobian=True)[source]

Forward propagation of flow layers.

Parameters:
  • x (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
  • compute_jacobian (bool, defaults to True) – Whether to calculate and store the log-determinant Jacobian. If True, the computed value is stored in logdet_jacobian.
Returns: z
Return type: torch.Tensor

inverse(z, y=None)[source]

Backward (inverse) propagation of flow layers. This method does not compute the log-determinant Jacobian.

Parameters:
  • z (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
Returns: x
Return type: torch.Tensor

extra_repr()[source]

Set the extra representation of the module

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

Coupling layer

AffineCoupling

class pixyz.flows.AffineCoupling(in_features, mask_type='channel_wise', scale_net=None, translate_net=None, scale_translate_net=None, inverse_mask=False)[source]

Bases: pixyz.flows.flows.Flow

Affine coupling layer

\begin{eqnarray*}
\mathbf{y}_{1:d} &=& \mathbf{x}_{1:d} \\
\mathbf{y}_{d+1:D} &=& \mathbf{x}_{d+1:D} \odot \exp(s(\mathbf{x}_{1:d})) + t(\mathbf{x}_{1:d})
\end{eqnarray*}

build_mask(x)[source]
Parameters: x (torch.Tensor) – Input tensor from which the mask shape is inferred.
Returns: mask
Return type: torch.Tensor

Examples

>>> scale_translate_net = lambda x: (x, x)
>>> f1 = AffineCoupling(4, mask_type="channel_wise", scale_translate_net=scale_translate_net,
...                     inverse_mask=False)
>>> x1 = torch.randn([1,4,3,3])
>>> f1.build_mask(x1)
tensor([[[[1.]],
<BLANKLINE>
         [[1.]],
<BLANKLINE>
         [[0.]],
<BLANKLINE>
         [[0.]]]])
>>> f2 = AffineCoupling(2, mask_type="checkerboard", scale_translate_net=scale_translate_net,
...                     inverse_mask=True)
>>> x2 = torch.randn([1,2,5,5])
>>> f2.build_mask(x2)
tensor([[[[0., 1., 0., 1., 0.],
          [1., 0., 1., 0., 1.],
          [0., 1., 0., 1., 0.],
          [1., 0., 1., 0., 1.],
          [0., 1., 0., 1., 0.]]]])
get_parameters(x, y=None)[source]
Parameters:
  • x (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
Returns:

  • log_s (torch.Tensor)
  • t (torch.Tensor)

Examples

>>> # In case of using scale_translate_net
>>> scale_translate_net = lambda x: (x, x)
>>> f1 = AffineCoupling(4, mask_type="channel_wise", scale_translate_net=scale_translate_net,
...                     inverse_mask=False)
>>> x1 = torch.randn([1,4,3,3])
>>> log_s, t = f1.get_parameters(x1)
>>> # In case of using scale_net and translate_net
>>> scale_net = lambda x: x
>>> translate_net = lambda x: x
>>> f2 = AffineCoupling(4, mask_type="channel_wise", scale_net=scale_net, translate_net=translate_net,
...                     inverse_mask=False)
>>> x2 = torch.randn([1,4,3,3])
>>> log_s, t = f2.get_parameters(x2)
forward(x, y=None, compute_jacobian=True)[source]

Forward propagation of flow layers.

Parameters:
  • x (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
  • compute_jacobian (bool, defaults to True) – Whether to calculate and store the log-determinant Jacobian. If True, the computed value is stored in logdet_jacobian.
Returns: z
Return type: torch.Tensor
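
Examples

A shape-level sketch of the forward pass (the coupling transformation is elementwise, so the input shape is preserved; the lambda is a toy stand-in for a real scale-and-translate network):

>>> scale_translate_net = lambda x: (x, x)
>>> f = AffineCoupling(4, mask_type="channel_wise", scale_translate_net=scale_translate_net,
...                    inverse_mask=False)
>>> x = torch.randn([1,4,3,3])
>>> z = f(x)
>>> z.shape
torch.Size([1, 4, 3, 3])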

inverse(z, y=None)[source]

Backward (inverse) propagation of flow layers. This method does not compute the log-determinant Jacobian.

Parameters:
  • z (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
Returns: x
Return type: torch.Tensor

extra_repr()[source]

Set the extra representation of the module

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

Invertible layer

ChannelConv

class pixyz.flows.ChannelConv(in_channels, decomposed=False)[source]

Bases: pixyz.flows.flows.Flow

Invertible 1 × 1 convolution.

Notes

This is implemented with reference to the following code: https://github.com/chaiyujin/glow-pytorch/blob/master/glow/modules.py
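
Examples

A minimal shape-level sketch (the 1 × 1 convolution mixes channels but preserves the input shape; the channel count is illustrative):

>>> import torch
>>> f = ChannelConv(4)
>>> x = torch.randn([1,4,3,3])
>>> z = f(x)
>>> z.shape
torch.Size([1, 4, 3, 3])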

get_parameters(x, inverse)[source]
forward(x, y=None, compute_jacobian=True)[source]

Forward propagation of flow layers.

Parameters:
  • x (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
  • compute_jacobian (bool, defaults to True) – Whether to calculate and store the log-determinant Jacobian. If True, the computed value is stored in logdet_jacobian.
Returns: z
Return type: torch.Tensor

inverse(z, y=None)[source]

Backward (inverse) propagation of flow layers. This method does not compute the log-determinant Jacobian.

Parameters:
  • z (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
Returns: x
Return type: torch.Tensor

Operation layer

Squeeze

class pixyz.flows.Squeeze[source]

Bases: pixyz.flows.flows.Flow

Squeeze operation.

c * s * s -> 4c * s/2 * s/2

Examples

>>> import torch
>>> a = torch.tensor([i+1 for i in range(16)]).view(1,1,4,4)
>>> print(a)
tensor([[[[ 1,  2,  3,  4],
          [ 5,  6,  7,  8],
          [ 9, 10, 11, 12],
          [13, 14, 15, 16]]]])
>>> f = Squeeze()
>>> print(f(a))
tensor([[[[ 1,  3],
          [ 9, 11]],
<BLANKLINE>
         [[ 2,  4],
          [10, 12]],
<BLANKLINE>
         [[ 5,  7],
          [13, 15]],
<BLANKLINE>
         [[ 6,  8],
          [14, 16]]]])
>>> print(f.inverse(f(a)))
tensor([[[[ 1,  2,  3,  4],
          [ 5,  6,  7,  8],
          [ 9, 10, 11, 12],
          [13, 14, 15, 16]]]])
forward(x, y=None, compute_jacobian=True)[source]

Forward propagation of flow layers.

Parameters:
  • x (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
  • compute_jacobian (bool, defaults to True) – Whether to calculate and store the log-determinant Jacobian. If True, the computed value is stored in logdet_jacobian.
Returns: z
Return type: torch.Tensor

inverse(z, y=None)[source]

Backward (inverse) propagation of flow layers. This method does not compute the log-determinant Jacobian.

Parameters:
  • z (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
Returns: x
Return type: torch.Tensor

Unsqueeze

class pixyz.flows.Unsqueeze[source]

Bases: pixyz.flows.operations.Squeeze

Unsqueeze operation.

c * s * s -> c/4 * 2s * 2s

Examples

>>> import torch
>>> a = torch.tensor([i+1 for i in range(16)]).view(1,4,2,2)
>>> print(a)
tensor([[[[ 1,  2],
          [ 3,  4]],
<BLANKLINE>
         [[ 5,  6],
          [ 7,  8]],
<BLANKLINE>
         [[ 9, 10],
          [11, 12]],
<BLANKLINE>
         [[13, 14],
          [15, 16]]]])
>>> f = Unsqueeze()
>>> print(f(a))
tensor([[[[ 1,  5,  2,  6],
          [ 9, 13, 10, 14],
          [ 3,  7,  4,  8],
          [11, 15, 12, 16]]]])
>>> print(f.inverse(f(a)))
tensor([[[[ 1,  2],
          [ 3,  4]],
<BLANKLINE>
         [[ 5,  6],
          [ 7,  8]],
<BLANKLINE>
         [[ 9, 10],
          [11, 12]],
<BLANKLINE>
         [[13, 14],
          [15, 16]]]])
forward(x, y=None, compute_jacobian=True)[source]

Forward propagation of flow layers.

Parameters:
  • x (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
  • compute_jacobian (bool, defaults to True) – Whether to calculate and store the log-determinant Jacobian. If True, the computed value is stored in logdet_jacobian.
Returns: z
Return type: torch.Tensor

inverse(z, y=None)[source]

Backward (inverse) propagation of flow layers. This method does not compute the log-determinant Jacobian.

Parameters:
  • z (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
Returns: x
Return type: torch.Tensor

Permutation

class pixyz.flows.Permutation(permute_indices)[source]

Bases: pixyz.flows.flows.Flow
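
Permute the channel order with fixed indices.

Parameters: permute_indices (list) – Indices specifying the new channel order (see the example below).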

Examples

>>> import torch
>>> a = torch.tensor([i+1 for i in range(16)]).view(1,4,2,2)
>>> print(a)
tensor([[[[ 1,  2],
          [ 3,  4]],
<BLANKLINE>
         [[ 5,  6],
          [ 7,  8]],
<BLANKLINE>
         [[ 9, 10],
          [11, 12]],
<BLANKLINE>
         [[13, 14],
          [15, 16]]]])
>>> perm = [0,3,1,2]
>>> f = Permutation(perm)
>>> f(a)
tensor([[[[ 1,  2],
          [ 3,  4]],
<BLANKLINE>
         [[13, 14],
          [15, 16]],
<BLANKLINE>
         [[ 5,  6],
          [ 7,  8]],
<BLANKLINE>
         [[ 9, 10],
          [11, 12]]]])
>>> f.inverse(f(a))
tensor([[[[ 1,  2],
          [ 3,  4]],
<BLANKLINE>
         [[ 5,  6],
          [ 7,  8]],
<BLANKLINE>
         [[ 9, 10],
          [11, 12]],
<BLANKLINE>
         [[13, 14],
          [15, 16]]]])
forward(x, y=None, compute_jacobian=True)[source]

Forward propagation of flow layers.

Parameters:
  • x (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
  • compute_jacobian (bool, defaults to True) – Whether to calculate and store the log-determinant Jacobian. If True, the computed value is stored in logdet_jacobian.
Returns: z
Return type: torch.Tensor

inverse(z, y=None)[source]

Backward (inverse) propagation of flow layers. This method does not compute the log-determinant Jacobian.

Parameters:
  • z (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
Returns: x
Return type: torch.Tensor

Shuffle

class pixyz.flows.Shuffle(in_features)[source]

Bases: pixyz.flows.operations.Permutation
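
Shuffle applies a fixed channel permutation chosen at construction. A minimal round-trip sketch (whatever permutation is chosen, inverse undoes it exactly):

Examples

>>> import torch
>>> f = Shuffle(4)
>>> x = torch.randn([1,4,2,2])
>>> torch.equal(f.inverse(f(x)), x)
True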

Reverse

class pixyz.flows.Reverse(in_features)[source]

Bases: pixyz.flows.operations.Permutation
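
Reverse applies the channel permutation in back-to-front order. A minimal round-trip sketch:

Examples

>>> import torch
>>> f = Reverse(4)
>>> x = torch.randn([1,4,2,2])
>>> torch.equal(f.inverse(f(x)), x)
True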

Flatten

class pixyz.flows.Flatten(in_size=None)[source]

Bases: pixyz.flows.flows.Flow
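
Flatten collapses all non-batch dimensions into a single feature dimension, and inverse restores the original shape (assuming the input shape is recorded on the forward pass when in_size is not given). A minimal sketch:

Examples

>>> import torch
>>> f = Flatten()
>>> x = torch.randn([2,4,3,3])
>>> z = f(x)
>>> z.shape
torch.Size([2, 36])
>>> f.inverse(z).shape
torch.Size([2, 4, 3, 3])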

forward(x, y=None, compute_jacobian=True)[source]

Forward propagation of flow layers.

Parameters:
  • x (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
  • compute_jacobian (bool, defaults to True) – Whether to calculate and store the log-determinant Jacobian. If True, the computed value is stored in logdet_jacobian.
Returns: z
Return type: torch.Tensor

inverse(z, y=None)[source]

Backward (inverse) propagation of flow layers. This method does not compute the log-determinant Jacobian.

Parameters:
  • z (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
Returns: x
Return type: torch.Tensor

BatchNorm1d

class pixyz.flows.BatchNorm1d(in_features, momentum=0.0)[source]

Bases: pixyz.flows.flows.Flow

Batch normalization with an inverse transformation.

Notes

This is implemented with reference to the following code: https://github.com/ikostrikov/pytorch-flows/blob/master/flows.py#L205

Examples

>>> x = torch.randn(20, 100)
>>> f = BatchNorm1d(100)
>>> # transformation
>>> z = f(x)
>>> # reconstruction
>>> _x = f.inverse(f(x))
>>> # check this reconstruction
>>> diff = torch.sum(torch.abs(_x-x)).item()
>>> diff < 0.1
True
forward(x, y=None, compute_jacobian=True)[source]

Forward propagation of flow layers.

Parameters:
  • x (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
  • compute_jacobian (bool, defaults to True) – Whether to calculate and store the log-determinant Jacobian. If True, the computed value is stored in logdet_jacobian.
Returns: z
Return type: torch.Tensor

inverse(z, y=None)[source]

Backward (inverse) propagation of flow layers. This method does not compute the log-determinant Jacobian.

Parameters:
  • z (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
Returns: x
Return type: torch.Tensor

BatchNorm2d

class pixyz.flows.BatchNorm2d(in_features, momentum=0.0)[source]

Bases: pixyz.flows.normalizations.BatchNorm1d

Batch normalization with an inverse transformation.

Notes

This is implemented with reference to the following code: https://github.com/ikostrikov/pytorch-flows/blob/master/flows.py#L205

Examples

>>> x = torch.randn(20, 100, 35, 45)
>>> f = BatchNorm2d(100)
>>> # transformation
>>> z = f(x)
>>> # reconstruction
>>> _x = f.inverse(f(x))
>>> # check this reconstruction
>>> diff = torch.sum(torch.abs(_x-x)).item()
>>> diff < 0.1
True

ActNorm2d

class pixyz.flows.ActNorm2d(in_features, scale=1.0)[source]

Bases: pixyz.flows.flows.Flow

Activation normalization. The bias and scale are initialized with a given minibatch so that each output channel has zero mean and unit variance for that minibatch. After initialization, bias and logs are trained as regular parameters.

Notes

This is implemented with reference to the following code: https://github.com/chaiyujin/glow-pytorch/blob/master/glow/modules.py
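
Examples

A minimal sketch mirroring the BatchNorm examples above (the first forward pass performs the data-dependent initialization):

>>> import torch
>>> x = torch.randn(20, 100, 8, 8)
>>> f = ActNorm2d(100)
>>> # transformation (initializes bias and scale on this first call)
>>> z = f(x)
>>> # check the reconstruction
>>> diff = torch.sum(torch.abs(f.inverse(z) - x)).item()
>>> diff < 0.1
True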

initialize_parameters(x)[source]
forward(x, y=None, compute_jacobian=True)[source]

Forward propagation of flow layers.

Parameters:
  • x (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
  • compute_jacobian (bool, defaults to True) – Whether to calculate and store the log-determinant Jacobian. If True, the computed value is stored in logdet_jacobian.
Returns: z
Return type: torch.Tensor

inverse(z, y=None)[source]

Backward (inverse) propagation of flow layers. This method does not compute the log-determinant Jacobian.

Parameters:
  • z (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
Returns: x
Return type: torch.Tensor

Preprocess

class pixyz.flows.Preprocess[source]

Bases: pixyz.flows.flows.Flow
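
In RealNVP-style pipelines this layer typically dequantizes inputs in [0, 1) and maps them through a logit transform (the static logit helper presumably computes log(x) - log(1 - x)). A shape-level sketch (the transformation is elementwise, so shapes are preserved):

Examples

>>> import torch
>>> f = Preprocess()
>>> x = torch.rand(1, 3, 4, 4)  # values in [0, 1), e.g. normalized pixels
>>> z = f(x)
>>> z.shape
torch.Size([1, 3, 4, 4])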

static logit(x)[source]
forward(x, y=None, compute_jacobian=True)[source]

Forward propagation of flow layers.

Parameters:
  • x (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
  • compute_jacobian (bool, defaults to True) – Whether to calculate and store the log-determinant Jacobian. If True, the computed value is stored in logdet_jacobian.
Returns: z
Return type: torch.Tensor

inverse(z, y=None)[source]

Backward (inverse) propagation of flow layers. This method does not compute the log-determinant Jacobian.

Parameters:
  • z (torch.Tensor) – Input data.
  • y (torch.Tensor, defaults to None) – Data for conditioning.
Returns: x
Return type: torch.Tensor