Architectures¶
Temporal U-Net¶
- class campd.architectures.diffusion.temporal_unet.TemporalUnetCfg¶
Bases: BaseModel
- model_config: ClassVar[ConfigDict] = {}¶
Configuration for the model; should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class campd.architectures.diffusion.temporal_unet.TemporalUnet¶
Bases: ReverseDiffusionNetwork
- conditioning_key = 'all'¶
- __init__(config)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- Parameters:
config (TemporalUnetCfg)
- classmethod from_config(config)¶
- Parameters:
config (TemporalUnetCfg | dict)
- forward(x, t, embedded_context_batch)¶
x : [ batch x horizon x state_dim ]
t : [ batch ] (usually int or float, but here a Tensor)
embedded_context_batch : EmbeddedContext, assumed to have a key called “all” that contains all the embeddings stacked along the second dimension.
- Return type:
- Parameters:
x (Tensor)
t (Tensor)
embedded_context_batch (EmbeddedContext)
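The “all” key convention can be shown with plain tensors. A minimal sketch (the shapes, the dict stand-in for EmbeddedContext, and the goal/obstacle embedding names are illustrative assumptions, not the campd implementation):

```python
import torch

batch, horizon, state_dim = 4, 16, 7

# Noisy trajectory batch and integer diffusion timesteps, both Tensors.
x = torch.randn(batch, horizon, state_dim)
t = torch.randint(0, 1000, (batch,))

# Two hypothetical context embeddings, each [batch x n_tokens x embed_dim].
goal_emb = torch.randn(batch, 1, 32)
obstacle_emb = torch.randn(batch, 5, 32)

# "all" stacks every embedding along the second (token) dimension.
embedded_context_batch = {"all": torch.cat([goal_emb, obstacle_emb], dim=1)}
print(embedded_context_batch["all"].shape)  # torch.Size([4, 6, 32])
```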
Reverse Diffusion Base¶
- class campd.architectures.diffusion.base.ReverseDiffusionNetwork¶
Abstract base class for neural networks that learn the reverse diffusion process.
- abstractmethod forward(x, t, embedded_context_batch)¶
Forward pass of the reverse diffusion network.
- Parameters:
x (Tensor) – The batched noisy input data.
t (Tensor) – The batched diffusion timestep(s).
embedded_context_batch (EmbeddedContext) – The embedded context for the batch.
- Returns:
The batched predicted noise or denoised data.
- Return type:
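A subclass only has to fill in this signature. A minimal sketch using a plain `nn.Module` stand-in for the abstract base (the tiny linear denoiser and the ignored context are illustrative assumptions):

```python
import torch
from torch import nn

class TinyReverseNet(nn.Module):
    """Toy denoiser with the forward(x, t, embedded_context_batch) contract."""

    def __init__(self, state_dim: int):
        super().__init__()
        self.net = nn.Linear(state_dim + 1, state_dim)

    def forward(self, x, t, embedded_context_batch):
        # Broadcast the timestep over the horizon and concatenate it to x.
        t_feat = t.float().view(-1, 1, 1).expand(-1, x.shape[1], 1)
        return self.net(torch.cat([x, t_feat], dim=-1))  # predicted noise

net = TinyReverseNet(state_dim=7)
out = net(torch.randn(4, 16, 7), torch.randint(0, 1000, (4,)), {})
print(out.shape)  # torch.Size([4, 16, 7])
```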
Context Encoder¶
- class campd.architectures.context.encoder.ContextEncoderCfg¶
Bases: BaseModel
- key_networks: Mapping[str, Spec[KeyNetModule]]¶
- model_config: ClassVar[ConfigDict] = {}¶
Configuration for the model; should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class campd.architectures.context.encoder.ContextEncoder¶
Bases: Module
Encodes TrajectoryContext into EmbeddedContext using a dedicated network for each context key.
- __init__(config)¶
- Parameters:
config (ContextEncoderCfg) – ContextEncoderCfg object or dictionary.
- key_networks: nn.ModuleDict[str, KeyNetModule]¶
- classmethod from_config(config)¶
Factory method to create ContextEncoder from config.
- Return type:
- Parameters:
config (ContextEncoderCfg | dict)
- forward(context)¶
- Parameters:
context (TrajectoryContext) – TrajectoryContext to encode.
- Return type:
- Returns:
EmbeddedContext containing the encoded context.
Layers¶
Core neural network layers and building blocks for CAMPD architectures.
Includes standard MLP implementations, Temporal U-Net residual blocks, attention mechanisms, and various normalizations/activations.
- campd.architectures.layers.layers.ACTIVATIONS = {'elu': nn.ELU, 'identity': nn.Identity, 'leaky_relu': nn.LeakyReLU, 'mish': nn.Mish, 'prelu': nn.PReLU, 'relu': nn.ReLU, 'sigmoid': nn.Sigmoid, 'softplus': nn.Softplus, 'tanh': nn.Tanh}¶
Dictionary mapping activation function names to their PyTorch module classes.
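A registry like this turns config strings into layer instances. A minimal sketch of the lookup pattern, with a two-entry stand-in for the full mapping above (the `make_activation` helper is hypothetical):

```python
import torch
from torch import nn

ACTIVATIONS = {"relu": nn.ReLU, "mish": nn.Mish}  # abbreviated stand-in

def make_activation(name: str) -> nn.Module:
    # Instantiate the class looked up by its config name.
    return ACTIVATIONS[name]()

act = make_activation("relu")
print(act(torch.tensor([-1.0, 2.0])))  # tensor([0., 2.])
```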
- class campd.architectures.layers.layers.MLP1DCfg¶
Bases: BaseModel
Configuration for a 1D Multi-Layer Perceptron.
Dimension of hidden layers.
- act: Literal['relu', 'sigmoid', 'tanh', 'leaky_relu', 'elu', 'prelu', 'softplus', 'mish', 'identity']¶
Activation function name (e.g., ‘relu’, ‘mish’, ‘elu’).
- model_config: ClassVar[ConfigDict] = {}¶
Configuration for the model; should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class campd.architectures.layers.layers.MLP1D¶
Bases: Module
A standard 1D Multi-Layer Perceptron (MLP) module.
Constructs a sequence of Linear -> [LayerNorm] -> Activation layers.
- __init__(config)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- Parameters:
config (MLP1DCfg)
- classmethod from_config(config)¶
Instantiates an MLP1D from a configuration object or dictionary.
- forward(x)¶
Forward pass through the MLP.
- Parameters:
x (torch.Tensor) – Input tensor.
- Returns:
Output tensor representing the processed features.
- Return type:
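The Linear -> [LayerNorm] -> Activation recipe can be sketched directly (the `build_mlp1d` helper and its arguments are illustrative assumptions; the real module is configured via MLP1DCfg):

```python
import torch
from torch import nn

def build_mlp1d(dims, act=nn.ReLU, layer_norm=True):
    """Sketch of a Linear -> [LayerNorm] -> Activation stack per layer."""
    layers = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        layers.append(nn.Linear(d_in, d_out))
        if layer_norm:
            layers.append(nn.LayerNorm(d_out))
        layers.append(act())
    return nn.Sequential(*layers)

mlp = build_mlp1d([7, 64, 64, 3])
print(mlp(torch.randn(10, 7)).shape)  # torch.Size([10, 3])
```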
- class campd.architectures.layers.layers.Residual¶
Bases: Module
Applies a residual connection around a given function/module.
- Parameters:
fn (nn.Module) – The module to wrap.
- __init__(fn)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, *args, **kwargs)¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
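The wrapper computes fn(x) + x. A minimal sketch under that assumption (`ResidualSketch` is a stand-in name, not the campd class):

```python
import torch
from torch import nn

class ResidualSketch(nn.Module):
    """Sketch of the residual wrapper: out = fn(x) + x."""

    def __init__(self, fn: nn.Module):
        super().__init__()
        self.fn = fn

    def forward(self, x, *args, **kwargs):
        return self.fn(x, *args, **kwargs) + x

block = ResidualSketch(nn.Identity())
x = torch.randn(2, 8)
print(torch.allclose(block(x), 2 * x))  # True
```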
- class campd.architectures.layers.layers.PreNorm¶
Bases: Module
Applies LayerNorm before a given function/module.
- Parameters:
dim (int) – Feature dimension for normalization.
fn (nn.Module) – The module to wrap.
- __init__(dim, fn)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class campd.architectures.layers.layers.LayerNorm¶
Bases: Module
Custom LayerNorm implementation avoiding standard PyTorch constraints.
- __init__(dim, eps=1e-05)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class campd.architectures.layers.layers.TimeEncoder¶
Bases: Module
Encodes time steps using sinusoidal embeddings followed by an MLP.
- __init__(dim, dim_out)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class campd.architectures.layers.layers.SinusoidalPosEmb¶
Bases: Module
Sinusoidal positional embeddings for time/position encoding.
- Parameters:
dim (int) – Embedding dimension. Must be an even number.
- __init__(dim)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
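The standard sinusoidal scheme pairs sines and cosines at geometrically spaced frequencies, as in the original Transformer. A minimal sketch assuming that formulation (the exact frequency spacing in campd may differ):

```python
import math
import torch

def sinusoidal_pos_emb(t: torch.Tensor, dim: int) -> torch.Tensor:
    """Map scalar positions t of shape [batch] to embeddings [batch x dim]."""
    half = dim // 2  # dim must be even: half sines, half cosines
    freqs = torch.exp(-math.log(10000) * torch.arange(half) / (half - 1))
    angles = t.float()[:, None] * freqs[None, :]
    return torch.cat([angles.sin(), angles.cos()], dim=-1)

emb = sinusoidal_pos_emb(torch.arange(4), dim=16)
print(emb.shape)  # torch.Size([4, 16])
```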
- class campd.architectures.layers.layers.Downsample1d¶
Bases: Module
Downsamples a 1D sequence using a strided convolution.
- Parameters:
dim (int) – Number of channels.
- __init__(dim)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class campd.architectures.layers.layers.Upsample1d¶
Bases: Module
Upsamples a 1D sequence using a transposed convolution.
- Parameters:
dim (int) – Number of channels.
- __init__(dim)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class campd.architectures.layers.layers.Conv1dBlock¶
Bases: Module
A convolutional block applying Conv1d -> GroupNorm -> Mish.
- Parameters:
- __init__(inp_channels, out_channels, kernel_size, padding=None, n_groups=8)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class campd.architectures.layers.layers.ResidualTemporalBlock¶
Bases: Module
A residual temporal block with conditioning for diffusion models.
- Parameters:
- __init__(inp_channels, out_channels, cond_embed_dim, n_support_points, kernel_size=5)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, c)¶
x : [ batch_size x inp_channels x n_support_points ]
c : [ batch_size x embed_dim ]
Returns: out : [ batch_size x out_channels x n_support_points ]
- campd.architectures.layers.layers.group_norm_n_groups(n_channels, target_n_groups=8)¶
Safely computes the number of groups for GroupNorm based on channels.
Finds a valid number of groups (a divisor of n_channels) close to the target.
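A minimal sketch of the divisor search (the tie-breaking order is an assumption; the campd implementation may differ):

```python
def group_norm_n_groups_sketch(n_channels: int, target_n_groups: int = 8) -> int:
    """Sketch: pick the largest divisor of n_channels that is <= target."""
    for g in range(min(target_n_groups, n_channels), 0, -1):
        if n_channels % g == 0:
            return g
    return 1

print(group_norm_n_groups_sketch(64))  # 8
print(group_norm_n_groups_sketch(10))  # 5
print(group_norm_n_groups_sketch(7))   # 7
```

GroupNorm requires num_channels to be divisible by num_groups, so a helper like this avoids runtime errors when channel counts vary across U-Net levels.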
Attention¶
- class campd.architectures.layers.attention.GEGLU¶
Bases: Module
- __init__(dim_in, dim_out)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
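GEGLU projects to twice the output width and gates one half with GELU. A minimal sketch under that standard formulation (`GEGLUSketch` is a stand-in name):

```python
import torch
from torch import nn
import torch.nn.functional as F

class GEGLUSketch(nn.Module):
    """Sketch of GEGLU: project to 2*dim_out, gate one half with GELU."""

    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.proj = nn.Linear(dim_in, dim_out * 2)

    def forward(self, x):
        x, gate = self.proj(x).chunk(2, dim=-1)
        return x * F.gelu(gate)

geglu = GEGLUSketch(32, 64)
print(geglu(torch.randn(2, 10, 32)).shape)  # torch.Size([2, 10, 64])
```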
- class campd.architectures.layers.attention.FeedForward¶
Bases: Module
- __init__(dim, dim_out=None, mult=4, glu=False, dropout=0.0)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- campd.architectures.layers.attention.zero_module(module)¶
Zero out the parameters of a module and return it.
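Zeroed parameters are commonly used so a block initially contributes nothing to its residual branch. A minimal sketch of the in-place zeroing (`zero_module_sketch` is a stand-in name):

```python
import torch
from torch import nn

def zero_module_sketch(module: nn.Module) -> nn.Module:
    """Sketch: zero all parameters in place and return the module."""
    for p in module.parameters():
        p.detach().zero_()
    return module

layer = zero_module_sketch(nn.Linear(4, 4))
print(layer(torch.randn(3, 4)).abs().sum().item())  # 0.0
```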
- campd.architectures.layers.attention.Normalize(in_channels)¶
- class campd.architectures.layers.attention.CrossAttention¶
Bases: Module
Cross-attention implemented with PyTorch SDPA. This uses FlashAttention / memory-efficient kernels on CUDA when available.
- __init__(query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.0)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, context=None, mask=None)¶
x : (b, n, query_dim)
context : (b, m, context_dim), or None for self-attention
mask : (b, m) boolean, where True means “this key position is valid”
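The mask semantics above map directly onto torch.nn.functional.scaled_dot_product_attention, where a boolean mask marks positions that may be attended to. A minimal sketch of expanding a per-key (b, m) mask (head count and shapes are illustrative):

```python
import torch
import torch.nn.functional as F

b, h, n, m, d = 2, 8, 10, 6, 64
q = torch.randn(b, h, n, d)
k = torch.randn(b, h, m, d)
v = torch.randn(b, h, m, d)

# (b, m) boolean mask, True = valid key position.
mask = torch.ones(b, m, dtype=torch.bool)
mask[:, -2:] = False  # last two key positions are padding

# SDPA expects a mask broadcastable to (b, h, n, m); True means "attend".
attn_mask = mask[:, None, None, :]
out = F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
print(out.shape)  # torch.Size([2, 8, 10, 64])
```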
- class campd.architectures.layers.attention.BasicTransformerBlock¶
Bases: Module
- __init__(dim, n_heads, d_head, dropout=0.0, context_dim=None, gated_ff=True, checkpoint=True)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, context=None, mask=None)¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class campd.architectures.layers.attention.SpatialTransformer¶
Bases: Module
Transformer block for trajectory-like data. First, it projects the input (i.e., the embedding) and reshapes it to (b, t, d). Then it applies the standard transformer computation. Finally, it reshapes back to the trajectory layout.
- __init__(in_channels, n_heads, d_head, depth=1, dropout=0.0, context_dim=None)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, context=None, mask=None)¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
Layer Utilities¶
- campd.architectures.layers.utils.prob_mask_like(shape, prob, device)¶
- campd.architectures.layers.utils.exists(val)¶
- campd.architectures.layers.utils.uniq(arr)¶
- campd.architectures.layers.utils.default(val, d)¶
- campd.architectures.layers.utils.max_neg_value(t)¶
- campd.architectures.layers.utils.init_(tensor)¶
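The `exists` and `default` helpers follow a convention common in diffusion codebases; the sketch below is an assumption about their behavior, not the campd source:

```python
def exists(val):
    """Sketch: is val set at all (i.e., not None)?"""
    return val is not None

def default(val, d):
    """Sketch: fall back to d (calling it if callable) when val is None."""
    if exists(val):
        return val
    return d() if callable(d) else d

print(default(None, 42))   # 42
print(default("ctx", 42))  # ctx
```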