MLP2Layers

class cl_gym.backbones.MLP2Layers(multi_head=False, num_classes_per_head=None, input_dim=784, hidden_dim_1=256, hidden_dim_2=256, output_dim=10, dropout_prob=0.0, activation='ReLU', bias=True, include_final_layer_act=False)[source]

Bases: cl_gym.backbones.base.ContinualBackbone

MLP model (feed-forward) with two hidden layers.
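A minimal construction sketch (illustrative, not taken from the library's own examples; it assumes the backbone behaves as a standard torch.nn.Module and reuses the default MNIST-style dimensions from the signature above):

    >>> import torch
    >>> from cl_gym.backbones import MLP2Layers
    >>> backbone = MLP2Layers(input_dim=784, hidden_dim_1=256, hidden_dim_2=256, output_dim=10)
    >>> x = torch.randn(32, 784)   # batch of 32 flattened inputs
    >>> logits = backbone(x)       # two hidden layers, then the output layer
    >>> logits.shape
    torch.Size([32, 10])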

blocks: Union[Iterable[nn.Module], nn.ModuleList]
forward(inp: torch.Tensor, head_ids: Optional[Iterable] = None) → torch.Tensor[source]
Parameters
  • inp – The input of shape [BatchSize x input_dim]

  • head_ids – Optional iterable (e.g., a list or 1-D tensor) of shape [BatchSize] containing the head id for each example.

Returns

The forward-pass output of shape [BatchSize x output_dim].

Return type

torch.Tensor

Note: head_ids is only used if the backbone is initialized with multi_head=True.
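A sketch of a multi-head forward pass. The two-head split of the 10 output units and the 1-based head ids are illustrative assumptions; consult the source link above for the exact head-indexing convention:

    >>> import torch
    >>> from cl_gym.backbones import MLP2Layers
    >>> backbone = MLP2Layers(multi_head=True, num_classes_per_head=2,
    ...                       input_dim=784, output_dim=10)
    >>> x = torch.randn(4, 784)
    >>> head_ids = torch.tensor([1, 1, 2, 2])  # one head id per example in the batch
    >>> out = backbone(x, head_ids)            # each example uses the head selected by its id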

get_block_grads(block_id: int) → Dict[str, Optional[torch.Tensor]][source]
get_block_outputs(inp: torch.Tensor, block_id: int, pre_act: bool = False)[source]
get_block_params(block_id: int) → Dict[str, torch.Tensor][source]
Parameters

block_id – the block index; for this backbone, each block corresponds to one linear layer.

Returns

a dictionary of the form {'weight': weight_params, 'bias': bias_params}

Return type

Dict[str, torch.Tensor]
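An illustrative sketch of the three block-introspection helpers. Whether the first linear layer is block 0 or block 1 is an assumption here; follow the [source] links to confirm the indexing:

    >>> import torch
    >>> from cl_gym.backbones import MLP2Layers
    >>> backbone = MLP2Layers(input_dim=784, output_dim=10)
    >>> x = torch.randn(8, 784)
    >>> params = backbone.get_block_params(1)   # {'weight': ..., 'bias': ...} for that layer
    >>> h = backbone.get_block_outputs(x, 1)    # that layer's output; pre_act=True should return pre-activation values
    >>> backbone(x).sum().backward()            # dummy backward pass so gradients exist
    >>> grads = backbone.get_block_grads(1)     # {'weight': grad or None, 'bias': grad or None}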

multi_head: bool
num_classes_per_head: int
training: bool