mqbench.utils package

Submodules

mqbench.utils.logger

mqbench.utils.logger.disable_logging()[source]
mqbench.utils.logger.set_log_level(level)[source]

mqbench.utils.registry

mqbench.utils.registry.register_convert_function(module_type)[source]
mqbench.utils.registry.register_deploy_function(backend_type)[source]
mqbench.utils.registry.register_model_quantizer(backend_type)[source]
mqbench.utils.registry.register_weight_equalization_function(layer1, layer2)[source]
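These registry helpers follow the common decorator-registry pattern: each call takes a key (a backend or module type) and returns a decorator that records the decorated class or function under that key. A minimal self-contained sketch of the pattern (the registry dict and class names here are illustrative stand-ins, not MQBench internals):

```python
# Hypothetical stand-in registry; MQBench keeps similar lookup tables internally.
MODEL_QUANTIZER_REGISTRY = {}

def register_model_quantizer(backend_type):
    """Decorator factory: map backend_type -> the decorated quantizer class."""
    def decorator(cls):
        MODEL_QUANTIZER_REGISTRY[backend_type] = cls
        return cls  # return the class unchanged so it remains usable directly
    return decorator

@register_model_quantizer("tensorrt")
class TensorRTModelQuantizer:
    pass

# Later, the quantizer for a backend is looked up by its key.
quantizer_cls = MODEL_QUANTIZER_REGISTRY["tensorrt"]
```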

mqbench.utils.state

mqbench.utils.state.disable_all(model)[source]
mqbench.utils.state.enable_all(model)[source]

Enable both calibration and quantization on every iteration, so the min/max statistics can be updated while training. Intended for QAT, but the quantization range cannot be fixed.
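The enable/disable helpers in this module all amount to iterating over a model's fake-quantize modules and flipping two flags: whether the observer collects min/max statistics (calibration), and whether fake quantization is applied. A self-contained sketch of that pattern, using a stand-in quantizer class rather than MQBench's actual modules:

```python
class FakeQuantStub:
    """Stand-in for a fake-quantize module with the two relevant flags."""
    def __init__(self):
        self.observer_enabled = False    # collect min/max statistics?
        self.fake_quant_enabled = False  # apply fake quantization?

def enable_all(quant_modules):
    # Calibration and quantization both on: min/max keeps updating during QAT.
    for q in quant_modules:
        q.observer_enabled = True
        q.fake_quant_enabled = True

def disable_all(quant_modules):
    for q in quant_modules:
        q.observer_enabled = False
        q.fake_quant_enabled = False
```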

mqbench.utils.state.enable_calibration(model)[source]
mqbench.utils.state.enable_calibration_quantization(model, quantizer_type='fake_quant')[source]
mqbench.utils.state.enable_calibration_woquantization(model, quantizer_type='fake_quant')[source]
mqbench.utils.state.enable_quantization(model, weight_cali_on=False, act_cali_on=False)[source]

We enable all quantization for quantization-aware training, but sometimes leave weight calibration on so that the weight min/max statistics keep updating throughout training. Some hardware exposes no weight quantization parameters to set, which means it computes the weight min/max itself. Assume weight scale * 127 > abs(weight).max() after some training: the training scale and the deploy scale can differ, so the range has to be updated every iteration.
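One way to read the weight_cali_on / act_cali_on flags: fake quantization is switched on everywhere, while the observers (calibration) stay on only where requested, e.g. to keep updating weight min/max for hardware that derives the weight range itself. A hedged sketch of that behavior (the dict-based quantizer states and the name convention are stand-ins, not MQBench's implementation):

```python
def enable_quantization(quantizers, weight_cali_on=False, act_cali_on=False):
    """quantizers: mapping of name -> state dict with 'observer'/'fake_quant' flags."""
    for name, state in quantizers.items():
        state["fake_quant"] = True  # always apply fake quantization during QAT
        # Keep calibration running only where the caller asked for it.
        if "weight" in name:
            state["observer"] = weight_cali_on
        else:
            state["observer"] = act_cali_on
```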

mqbench.utils.utils

mqbench.utils.utils.deepcopy_graphmodule(gm: GraphModule)[source]

Rewritten deepcopy for GraphModule that also copies its 'graph' attribute, which a default deepcopy does not handle.

Parameters:

gm (GraphModule) –

Returns:

A deepcopied gm.

Return type:

GraphModule
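The pattern behind deepcopy_graphmodule is: deepcopy the module as usual, then explicitly copy the attribute the default deepcopy misses. A self-contained sketch using a stand-in class (not torch.fx's GraphModule):

```python
import copy

class GraphModuleStub:
    """Stand-in for torch.fx.GraphModule: holds a 'graph' attribute."""
    def __init__(self, graph):
        self.graph = graph

def deepcopy_graphmodule(gm):
    copied = copy.deepcopy(gm)
    # Re-copy 'graph' explicitly so the copy shares nothing with the original.
    copied.graph = copy.deepcopy(gm.graph)
    return copied
```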

mqbench.utils.utils.deepcopy_mixedmodule(mm: Module, module_list: list)[source]

Deepcopy a mixed module whose children split into a plain nn part and post-processing; the children named in module_list are GraphModules.

Parameters:
  • mm (nn.Module) –

  • module_list (list) – the names of mm's children that are GraphModules.

Returns:

A deepcopied mm.

Return type:

nn.Module

mqbench.utils.utils.getitem2node(model: GraphModule) dict[source]
mqbench.utils.utils.is_symmetric_quant(qscheme: qscheme) bool[source]
mqbench.utils.utils.is_tracing_state()[source]
class mqbench.utils.utils.no_jit_trace[source]

Bases: object

mqbench.utils.utils.pot_quantization(tensor: Tensor, mode='round')[source]
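pot_quantization presumably snaps scale values to powers of two ("POT"). One plausible reading of the operation, sketched on a plain float rather than a tensor (the mode names and the rounding rule are assumptions, not taken from MQBench's source):

```python
import math

def pot_quantize(scale, mode="round"):
    """Snap a positive scale to the nearest power of two (assumed behavior)."""
    exponent = math.log2(scale)
    if mode == "round":
        exponent = round(exponent)   # nearest power of two
    else:                            # e.g. a hypothetical floor mode
        exponent = math.floor(exponent)
    return 2.0 ** exponent
```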
mqbench.utils.utils.sync_tensor(tensor)[source]
mqbench.utils.utils.topology_order(model)[source]

Module contents