Graph Converters¶
- class nnabla.experimental.graph_converters.GraphConverter(modifiers=[])[source]¶
Convert a graph with the modifiers by traversing from output variables.
- convert(o)[source]¶
  - Parameters
    o (list of nnabla.Variable) – Output variables.
- class nnabla.experimental.graph_converters.FunctionModifier[source]¶
Base class of modifiers.
The modify method is called for each function, with its inputs, in graph topological order when you call the GraphConverter(<modifiers>).convert(<root variable>) method.
- finish_up()[source]¶
Finish up the function modification for the current conversion pass.
Clean up the internal modifier states.
- Parameters
None –
- Returns
None
- get_parameter_scope(v)[source]¶
Get the parameter scope name corresponding to v.
- Parameters
v (nnabla.Variable) – NNabla Variable object.
- Returns
Scope name
- Return type
str
- modify(f, inputs)[source]¶
Modify the function.
Implement this method in a sub class to modify a function.
Examples:
class ReLUToLeakyReLUModifier(FunctionModifier):

    def __init__(self):
        super(ReLUToLeakyReLUModifier, self).__init__()

    def modify(self, f, inputs):
        if f.info.type_name == 'ReLU':
            x = inputs[0]
            return F.leaky_relu(x)
This example is a simple case since the network topological order does not change. In GraphConverter, the modify method is expected to be called along the original network topological order, not the modified order. For such a complex case, see the modify method of
BatchNormalizationFoldingModifierInner as a reference.
- Parameters
  f (nnabla.function.Function) – NNabla function object.
  inputs (list of Variable) – New inputs to f. These may be modified ones or the same as f.inputs.
- Returns
Variable or list of Variable.
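The traverse-and-dispatch pattern behind GraphConverter and FunctionModifier can be sketched framework-free. The toy `Node` class, `relu_to_leaky_relu`, and `convert` below are illustrative assumptions, not nnabla's internals:

```python
# Toy sketch of the converter dispatch pattern (NOT nnabla's
# implementation): walk functions in topological order and let each
# modifier optionally return a replacement node.

class Node:
    def __init__(self, type_name, inputs=()):
        self.type_name = type_name
        self.inputs = list(inputs)

def relu_to_leaky_relu(node, inputs):
    # Analogous to FunctionModifier.modify: return a replacement,
    # or None to keep the function unchanged.
    if node.type_name == 'ReLU':
        return Node('LeakyReLU', inputs)
    return None

def convert(topo_order, modifiers):
    new_of = {}  # maps original node -> converted node
    for node in topo_order:
        inputs = [new_of.get(i, i) for i in node.inputs]
        replaced = None
        for m in modifiers:
            replaced = m(node, inputs)
            if replaced is not None:
                break
        new_of[node] = replaced if replaced is not None else Node(node.type_name, inputs)
    return new_of[topo_order[-1]]

x = Node('Input')
h = Node('Convolution', [x])
y = Node('ReLU', [h])
out = convert([x, h, y], [relu_to_leaky_relu])
print(out.type_name)  # LeakyReLU
```

Because modifiers see functions in the original topological order, a modifier that rewires several functions at once (like the batch-norm folding ones below) must track extra state, which is what `finish_up` resets between conversions.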
Function Modifiers¶
- class nnabla.experimental.graph_converters.BatchNormalizationFoldingModifier(opposite=False, channel_last=False)[source]¶
A single Convolution -> BatchNormalization pass is folded into one Convolution.
If there is a Convolution -> BatchNormalization pass, fold the batch normalization parameters into the kernel and bias (if it exists) of the preceding convolution, then skip the batch normalization following the convolution.
Supported folding functions: Convolution, Deconvolution, Affine.
Examples:
pred = Model(...)

import nnabla.experimental.graph_converters as GC
modifiers = [GC.BatchNormalizationFoldingModifier()]
gc = GC.GraphConverter(modifiers)
pred = gc.convert(pred)
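The folding arithmetic can be checked numerically for the affine case. The formulas below are the standard batch-norm folding identity, shown here as an illustration with NumPy rather than nnabla's actual code:

```python
# Numeric sketch of batch-norm folding (standard identity, not nnabla
# code): for y = gamma * (W x + b - mean) / sqrt(var + eps) + beta,
# an equivalent single affine layer is W' x + b' with
#   s  = gamma / sqrt(var + eps)
#   W' = s[:, None] * W
#   b' = s * (b - mean) + beta
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # affine weights (out, in)
b = rng.normal(size=4)
gamma, beta = rng.normal(size=4), rng.normal(size=4)
mean, var = rng.normal(size=4), rng.uniform(0.5, 2.0, size=4)
eps = 1e-5

x = rng.normal(size=3)
bn_out = gamma * (W @ x + b - mean) / np.sqrt(var + eps) + beta

s = gamma / np.sqrt(var + eps)
W_fold = s[:, None] * W               # folded kernel
b_fold = s * (b - mean) + beta        # folded bias
folded_out = W_fold @ x + b_fold

print(np.allclose(bn_out, folded_out))  # True
```

Note that `b_fold` is generally nonzero even when `b` is zero, which is why the AddBiasModifier below inserts a bias before folding when the convolution lacks one.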
- class nnabla.experimental.graph_converters.BatchNormalizationFoldingModifierInner(channel_last=False)[source]¶
A single Convolution -> BatchNormalization pass is folded into one Convolution.
If there is a Convolution -> BatchNormalization pass, fold the batch normalization parameters into the kernel and bias (if it exists) of the preceding convolution, then skip the batch normalization following the convolution.
Supported folding functions: Convolution, Deconvolution, Affine.
- class nnabla.experimental.graph_converters.AddBiasModifier[source]¶
Add a bias to Convolution in the BatchNormalization folding case if it does not have one.
Supported folding functions: Convolution, Deconvolution, Affine.
Examples:
pred = Model(...)

import nnabla.experimental.graph_converters as GC
modifiers = [GC.AddBiasModifier()]
gc = GC.GraphConverter(modifiers)
pred = gc.convert(pred)
- class nnabla.experimental.graph_converters.BatchNormalizationSelfFoldingModifier(name='bn-self-folding')[source]¶
The parameters of the batch normalization are replaced by a simple scale and bias.
- Parameters
  name (str) – Prefix of the parameter scope.
Examples:
pred = Model(...)

import nnabla.experimental.graph_converters as GC
modifiers = [GC.BatchNormalizationSelfFoldingModifier()]
gc = GC.GraphConverter(modifiers)
pred = gc.convert(pred)
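The reduction behind self-folding can be sketched with NumPy; this is the standard inference-time identity, not nnabla's code:

```python
# Numeric sketch (illustration, not nnabla code): at inference time,
# batch normalization collapses to a per-channel scale and bias:
#   scale = gamma / sqrt(var + eps)
#   bias  = beta - scale * mean
import numpy as np

rng = np.random.default_rng(1)
c = 5
gamma, beta = rng.normal(size=c), rng.normal(size=c)
mean, var = rng.normal(size=c), rng.uniform(0.5, 2.0, size=c)
eps = 1e-5
x = rng.normal(size=(2, c))  # (batch, channel)

bn_out = gamma * (x - mean) / np.sqrt(var + eps) + beta

scale = gamma / np.sqrt(var + eps)
bias = beta - scale * mean
print(np.allclose(bn_out, scale * x + bias))  # True
```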
- class nnabla.experimental.graph_converters.ChannelFirstModifier(inputs, inputs_cf=None)[source]¶
Convert graph shape from Channel last (NHWC) to Channel first (NCHW) format.
Supported functions: Convolution, Deconvolution, BatchNormalization, MaxPooling, AveragePooling, SumPooling, Unpooling, Concatenate
- Parameters
  inputs (list of nn.Variable) – Original channel-last (NHWC) very beginning inputs of a network.
  inputs_cf (list of nn.Variable) – Channel-first (NCHW) version of the very beginning inputs of a network. If this is not given, inputs_cf are generated internally and held.
Examples:
pred = Model(...)

import nnabla.experimental.graph_converters as GC
modifiers = [GC.ChannelFirstModifier(<inputs of pred>)]
gc = GC.GraphConverter(modifiers)
pred = gc.convert(pred)
- class nnabla.experimental.graph_converters.ChannelLastModifier(inputs, inputs_cl=None)[source]¶
Convert graph shape from Channel first (NCHW) to Channel last (NHWC) format.
Supported functions: Convolution, Deconvolution, BatchNormalization, MaxPooling, AveragePooling, SumPooling, Unpooling, Concatenate
- Parameters
  inputs (list of nn.Variable) – Original channel-first (NCHW) very beginning inputs of a network.
  inputs_cl (list of nn.Variable) – Channel-last (NHWC) version of the very beginning inputs of a network. If this is not given, inputs_cl are generated internally and held.
Examples:
pred = Model(...)

import nnabla.experimental.graph_converters as GC
modifiers = [GC.ChannelLastModifier(<inputs of pred>)]
gc = GC.GraphConverter(modifiers)
pred = gc.convert(pred)
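The layout change itself is a transpose of the feature axes; the modifier rewrites the graph, but the underlying data movement can be sketched with NumPy (an illustration of the assumed NCHW/NHWC semantics, not nnabla's implementation):

```python
# Sketch of the NCHW <-> NHWC layout change as an axis transpose.
import numpy as np

x_cf = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)  # NCHW
x_cl = np.transpose(x_cf, (0, 2, 3, 1))              # NHWC
print(x_cl.shape)  # (2, 4, 5, 3)

# The round trip back to channel first recovers the original array,
# mirroring what ChannelFirstModifier does in the other direction.
x_back = np.transpose(x_cl, (0, 3, 1, 2))
print(np.array_equal(x_cf, x_back))  # True
```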
- class nnabla.experimental.graph_converters.FusedBatchNormalizationModifier[source]¶
A BatchNormalization -> Add2 -> Non-Linear block is fused into one FusedBatchNormalization.
If there is a BatchNormalization -> Add2 -> Non-Linear block, remove all the block functions and replace the whole block with FusedBatchNormalization.
Examples:
pred = Model(...)

import nnabla.experimental.graph_converters as GC
modifiers = [GC.FusedBatchNormalizationModifier()]
gc = GC.GraphConverter(modifiers)
pred = gc.convert(pred)
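Why this fusion is numerically safe can be seen by writing out the block; the sketch below (NumPy, with relu as the non-linearity) shows the step-by-step block and the single fused expression compute the same values:

```python
# Numeric sketch (illustration, not nnabla code): the fused op computes
# non_linear(batch_norm(x) + z), i.e. BN -> Add2 -> ReLU in one shot.
import numpy as np

rng = np.random.default_rng(2)
c = 4
gamma, beta = rng.normal(size=c), rng.normal(size=c)
mean, var = rng.normal(size=c), rng.uniform(0.5, 2.0, size=c)
eps = 1e-5
x, z = rng.normal(size=(2, c)), rng.normal(size=(2, c))

# Step-by-step block: BatchNormalization -> Add2 -> ReLU
bn = gamma * (x - mean) / np.sqrt(var + eps) + beta
add2 = bn + z
block_out = np.maximum(add2, 0.0)

# Single fused expression
fused_out = np.maximum(gamma * (x - mean) / np.sqrt(var + eps) + beta + z, 0.0)

print(np.allclose(block_out, fused_out))  # True
```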
- class nnabla.experimental.graph_converters.UnfusedBatchNormalizationModifier[source]¶
Unfuse FusedBatchNormalization into a BatchNormalization -> Add2 -> Non-Linear block.
If there is a FusedBatchNormalization pass, remove the fused batch normalization and replace it with a BatchNormalization -> Add2 -> Non-Linear block.
Supported Non-Linear functions: relu
Examples:
pred = Model(...)

import nnabla.experimental.graph_converters as GC
modifiers = [GC.UnfusedBatchNormalizationModifier()]
gc = GC.GraphConverter(modifiers)
pred = gc.convert(pred)
- class nnabla.experimental.graph_converters.RemoveFunctionModifier(rm_funcs=[])[source]¶
Remove specified function layer(s) from a graph.
A convenient converter for when one or more functions in an existing graph need to be removed. This converter removes the specified function(s) without recreating a new graph from scratch.
- Parameters
  rm_funcs (list of str) – List of function names.
Examples:
pred = Model(...)

import nnabla.experimental.graph_converters as GC
modifiers = [GC.RemoveFunctionModifier(rm_funcs=['BatchNormalization', 'MulScalar'])]
gc = GC.GraphConverter(modifiers)
pred = gc.convert(pred)
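Removing a function without rebuilding the graph amounts to rewiring: consumers of the removed function take its input instead. A toy sketch of that rewiring (illustrative data structures, not nnabla code):

```python
# Toy sketch of function removal by rewiring (NOT nnabla code):
# each op is (function_name, input_name, output_name); a removed op's
# consumers are routed to the removed op's input.
ops = [('Convolution', 'x', 'h1'),
       ('MulScalar', 'h1', 'h2'),
       ('ReLU', 'h2', 'y')]

def remove(ops, rm_funcs):
    alias = {}  # removed output name -> surviving input name
    kept = []
    for name, inp, out in ops:
        inp = alias.get(inp, inp)
        if name in rm_funcs:
            alias[out] = inp  # skip this op; route its input forward
        else:
            kept.append((name, inp, out))
    return kept

print(remove(ops, ['MulScalar']))
# [('Convolution', 'x', 'h1'), ('ReLU', 'h1', 'y')]
```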