ContinualBackbone

class cl_gym.backbones.ContinualBackbone(multi_head: bool = False, num_classes_per_head: Optional[int] = None)[source]

Bases: torch.nn.modules.module.Module

Base class for a continual backbone. Currently, this is a simple wrapper around PyTorch’s nn.Module to support multiple heads.

forward(inp: torch.Tensor, head_ids: Optional[Iterable] = None) → torch.Tensor[source]

Performs the forward pass.

Parameters
  • inp – The input tensor for the forward pass. Shape: [BatchSize x …]

  • head_ids – Optional list of classifier head ids. Shape: [BatchSize]

Returns

A PyTorch tensor of shape [BatchSize x …]

Return type

torch.Tensor

Note:

head_ids is only used if the backbone is multi-head.
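To make the multi-head dispatch concrete, here is a minimal, self-contained sketch of how a forward pass might route each example to its task's head. The `ToyMultiHeadBackbone` class and its per-head scaling are hypothetical illustrations, not cl_gym's actual implementation:

```python
# Pure-Python sketch (no torch) of multi-head routing in forward().
# All names and the per-head computation here are illustrative only.

class ToyMultiHeadBackbone:
    def __init__(self, num_heads, num_classes_per_head):
        self.multi_head = True
        self.num_heads = num_heads
        self.num_classes_per_head = num_classes_per_head

    def forward(self, inp, head_ids=None):
        # Shared trunk: identity here, for illustration.
        features = inp
        # Stand-in for per-task classifier heads: head h scales by (h + 1).
        outputs = [[x * (h + 1) for h in range(self.num_heads)] for x in features]
        if self.multi_head and head_ids is not None:
            # Keep exactly one head's output per example,
            # analogous to select_output_head below.
            return [row[h] for row, h in zip(outputs, head_ids)]
        return outputs

backbone = ToyMultiHeadBackbone(num_heads=3, num_classes_per_head=2)
print(backbone.forward([10, 10, 10], head_ids=[0, 1, 2]))  # [10, 20, 30]
```

The point of the sketch: the trunk produces one candidate output per head, and `head_ids` decides, per example, which candidate survives.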

get_block_grads(block_id: int) → torch.Tensor[source]

get_block_outputs(inp: torch.Tensor, block_id: int, pre_act: bool = False) → torch.Tensor[source]

get_block_params(block_id: int) → Dict[str, torch.Tensor][source]
Parameters

block_id – The block number whose parameters should be returned.

Returns

A dictionary of the format {'param_name': params}

Return type

Dict[str, torch.Tensor]

Note:

A block can have several layers (e.g., in ResNet) or consist of different parameters. For instance, the default Linear block has `
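As an illustration of the documented return format, here is a hypothetical sketch of `get_block_params` for a linear block. The block layout, parameter names, and values are assumptions for demonstration, not cl_gym's actual storage:

```python
# Hypothetical per-block parameter registry; names/shapes are illustrative.
# In the real backbone these would be torch.Tensor parameters.
blocks = {
    1: {"weight": [[0.1, 0.2], [0.3, 0.4]], "bias": [0.0, 0.0]},
    2: {"weight": [[1.0], [1.0]], "bias": [0.5]},
}

def get_block_params(block_id):
    # Returns a dict of format {'param_name': params}, matching the
    # documented contract; a Linear block contributes weight and bias.
    return blocks[block_id]

params = get_block_params(1)
print(sorted(params))  # ['bias', 'weight']
```

Callers can then iterate the dict uniformly regardless of how many parameters a given block holds.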

select_output_head(output, head_ids: Iterable) → torch.Tensor[source]

Helper method for selecting the task-specific head for each example.

Parameters
  • output – The output of the forward pass. Shape: [BatchSize x …]

  • head_ids – The head id for each example. Shape: [BatchSize]

Returns

The output, where each example in the batch is computed by the head given in head_ids.

Return type

torch.Tensor
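One plausible layout for this selection, sketched below under the assumption that the final layer emits num_heads × num_classes_per_head logits per example and selection keeps only the slice belonging to each example's head. This layout is an assumption for illustration; the function name mirrors the API but the body is not cl_gym's code:

```python
# Sketch of per-example head selection over a flat logit vector.
# Assumed layout: logits for head h occupy
# [h * num_classes_per_head, (h + 1) * num_classes_per_head).

def select_output_head(output, head_ids, num_classes_per_head):
    selected = []
    for logits, head in zip(output, head_ids):
        start = head * num_classes_per_head
        # Keep only the slice produced by this example's head.
        selected.append(logits[start:start + num_classes_per_head])
    return selected

batch = [
    [1, 2, 3, 4, 5, 6],    # example 0: 3 heads x 2 classes
    [7, 8, 9, 10, 11, 12], # example 1
]
print(select_output_head(batch, head_ids=[0, 2], num_classes_per_head=2))
# [[1, 2], [11, 12]]
```

This is why head_ids has shape [BatchSize]: every example independently names the one head whose outputs it contributes downstream.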

training: bool