mqbench.nn.intrinsic.qat.modules package

Submodules

mqbench.nn.intrinsic.qat.modules.linear_fused

class mqbench.nn.intrinsic.qat.modules.linear_fused.LinearBn1d(in_features, out_features, bias, eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)[source]

Bases: Linear, _FusedModule
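
LinearBn1d is the quantization-aware-training counterpart of an nn.Linear followed by an nn.BatchNorm1d: the BN parameters are folded into the linear weight before the weight fake-quantizer is applied, so training sees numerics close to the eventually fused deployment graph. A minimal usage sketch, assuming torch's default QAT qconfig purely for illustration (any qconfig that provides a weight fake-quantizer should work):

>>> import torch
>>> from torch.quantization import get_default_qat_qconfig
>>> from mqbench.nn.intrinsic.qat.modules.linear_fused import LinearBn1d
>>> qconfig = get_default_qat_qconfig('fbgemm')   # assumed qconfig, illustration only
>>> qat_mod = LinearBn1d(16, 32, bias=True, qconfig=qconfig)
>>> x = torch.randn(8, 16)                        # batch of 8 sixteen-dimensional inputs
>>> y = qat_mod(x)                                # Linear + folded BN forward with fake-quantized weight
>>> y.shape
torch.Size([8, 32])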

extra_repr()[source]

Set the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(input)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
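
Continuing the sketch above, prefer calling the module instance rather than forward() so that registered hooks run:

>>> y = qat_mod(x)            # recommended: runs forward() and any registered hooks
>>> y = qat_mod.forward(x)    # works, but silently skips the hooks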

freeze_bn_stats()[source]
classmethod from_float(mod)[source]

Create a QAT module from a float module or a qparams_dict.

Args: mod: a float module, either produced by torch.quantization utilities or provided directly by the user.
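
A sketch of the usual conversion path, assuming the float fused counterpart is importable as mqbench.nn.intrinsic.LinearBn1d (the exact import path may differ between MQBench versions):

>>> import torch.nn as nn
>>> from torch.quantization import get_default_qat_qconfig
>>> import mqbench.nn.intrinsic as nni                                   # float fused modules, path assumed
>>> from mqbench.nn.intrinsic.qat.modules.linear_fused import LinearBn1d
>>> float_fused = nni.LinearBn1d(nn.Linear(16, 32), nn.BatchNorm1d(32))
>>> float_fused.qconfig = get_default_qat_qconfig('fbgemm')              # from_float reads the qconfig off the float module
>>> qat_mod = LinearBn1d.from_float(float_fused)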

in_features: int
out_features: int
reset_bn_parameters()[source]
reset_parameters()[source]
reset_running_stats()[source]
train(mode=True)[source]

BatchNorm's training behavior is controlled by the self.training flag. When BN is frozen, this method prevents the flag from being changed, so that calling model.train() on a model with a frozen BN behaves properly.
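
For example, continuing the sketch above, freezing the BN statistics makes a later call to train() leave the running statistics untouched:

>>> qat_mod.freeze_bn_stats()   # stop updating running_mean / running_var
>>> qat_mod.train()             # BN statistics stay frozen despite switching to train mode
>>> qat_mod.update_bn_stats()   # resume updating BN statistics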

update_bn_stats()[source]
weight: Tensor

Module contents