Operator Deliverable Generator
Function
For a TensorFlow operator, the tool generates the operator deliverables directly from the command line, based on either the TensorFlow prototype definition or the IR prototype definition of the operator adapted to the Ascend AI Processor. The deliverables include the operator code implementation file, operator adaptation plug-in, operator prototype definition, operator information definition, and build template.
For a Caffe operator, the tool generates the same deliverables based on the IR prototype definition of the operator adapted to the Ascend AI Processor.
For a PyTorch operator, the tool generates the deliverables based on the IR prototype definition of the operator adapted to the Ascend AI Processor. In this case, the deliverables include the operator code implementation file, operator prototype definition, operator information definition, and build template (no adaptation plug-in).
Tool Preparation
The tool is located at toolkit/python/site-packages/bin/msopgen in the Toolkit installation path.
Prerequisites
- msopgen depends on the Python library xlrd. Before using this tool, run the following command to install the xlrd library:
pip3.7.5 install xlrd
- Set the following environment variables:
export install_path=/home/HwHiAiUser/Ascend/ascend-toolkit/latest
export PYTHONPATH=${install_path}/toolkit/python/site-packages:$PYTHONPATH
Replace install_path with the actual Toolkit installation path.
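The following minimal Python check verifies both prerequisites before you run the tool. This is a sketch: the site-packages path is the example installation path from above and must be adjusted to your environment.

import os
import sys

try:
    import xlrd  # msopgen needs xlrd to parse the .xlsx IR template
    print("xlrd is available")
except ImportError:
    sys.exit("xlrd is missing; install it with: pip3.7.5 install xlrd")

# Path below assumes the example installation path used above.
site_packages = "/home/HwHiAiUser/Ascend/ascend-toolkit/latest/toolkit/python/site-packages"
if site_packages not in os.environ.get("PYTHONPATH", ""):
    print("Warning: PYTHONPATH does not include", site_packages)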
Command Syntax
Run the following command in the tool directory:
python3.7.5 msopgen gen -i {operator define file} -f {framework type} -c {Compute Unit} -out {Output Path} -m {0,1} -op {Operator Type}
Alternatively,
./msopgen gen -i {operator define file} -f {framework type} -c {Compute Unit} -out {Output Path} -m {0,1} -op {Operator Type}
Command-line Options
Option | Description | Required
---|---|---
{gen,mi} | Scenario selection. This section describes the gen scenario, which generates operator deliverables. | Yes
-i | Tool input, that is, the path of the operator definition file, either absolute or relative to the directory where the tool is executed. The tool execution user must have read and write permissions on the path. Two types of operator definition files are supported: a TensorFlow prototype definition file (.txt) and an IR prototype definition file (.xlsx) of the operator adapted to the Ascend AI Processor. For details about the file content, see Example op_define Definition Files. | Yes
-f | Framework type: tf (TensorFlow), caffe (Caffe), or pytorch (PyTorch). | Yes
-c | Compute unit used by the operator, in the format {Core Type}-{SoC Version}, connected with a hyphen (-), for example, ai_core-ascend310. Replace {SoC Version} with the actual Ascend AI Processor version. | Yes
-out | Output path for the generated deliverables, either absolute or relative to the directory where the tool is executed. The path must already exist, and the tool execution user must have read and write permissions on it. | Yes
-m | Deliverable generation mode. 0: create a new operator project; 1: add the operator to an existing project directory (see the example below). Default value: 0. | No
-op | Applies when -i is set to a TBE operator definition IR file (.xlsx). Type of the operator defined on the "Op" sheet. If this option is omitted and the sheet defines multiple operators, the tool prompts you to select one. | No
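As an illustration of how these options combine, the following Python sketch assembles and runs a gen command. The file name and compute unit are placeholders; msopgen itself remains the supported interface.

import subprocess

def run_msopgen(op_def, framework, compute_unit, out_dir, mode=0, op_type=None):
    """Assemble and run an 'msopgen gen' command from the options above."""
    cmd = ["python3.7.5", "msopgen", "gen",
           "-i", op_def, "-f", framework,
           "-c", compute_unit, "-out", out_dir, "-m", str(mode)]
    if op_type is not None:  # only meaningful for multi-operator .xlsx input
        cmd += ["-op", op_type]
    subprocess.run(cmd, check=True)

# Placeholder inputs; replace with your own definition file and SoC version.
run_msopgen("op_define/tf_op.txt", "tf", "ai_core-ascend310", "./output_data")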
Example
Run the following command in the tool directory to generate the deliverables of your custom operator:
- Use the TensorFlow prototype definition when the source framework is TensorFlow:
python3.7.5 msopgen gen -i op_define/tf_op.txt -f tf -c ai_core-ascend310 -out ./output_data
- Use the TBE operator IR definition when the source framework is Caffe:
python3.7.5 msopgen gen -i op_define/Ascend_IR_Template.xlsx -f caffe -c ai_core-ascend310 -out ./output_data
After the command is executed, the following custom operator deliverables are generated in the output_data folder of the current directory.
├── build.sh                      // Entry script for building the operator project
├── cmake
│   ├── config.cmake
│   └── util                      // Scripts used for operator project compilation and common build files
├── CMakeLists.txt                // CMakeLists.txt of the operator project
├── custom.proto                  // Proto file of the Caffe custom operator
├── framework                     // Directory of the operator plug-in implementation files
│   ├── CMakeLists.txt
│   ├── caffe_plugin              // Operator adaptation plug-in code generated when the source framework is Caffe
│   │   ├── caffe_xx_plugin.cpp   // Implementation file of the operator adaptation plug-in
│   │   └── CMakeLists.txt
│   └── tf_plugin                 // Operator adaptation plug-in code generated when the source framework is TensorFlow
│       ├── tensorflow_xx_plugin.cpp   // Implementation file of the operator adaptation plug-in
│       └── CMakeLists.txt
├── op_proto                      // Directory of the operator prototype definition files and the CMakeLists file
│   ├── xx.h
│   ├── xx.cpp
│   └── CMakeLists.txt
├── tbe
│   ├── CMakeLists.txt
│   ├── impl                      // Directory of the operator code implementation files
│   │   └── xx.py                 // Operator code implementation file
│   └── op_info_cfg               // Directory for storing operator information library files
│       └── ai_core
│           └── {SoC Version}     // Ascend AI Processor version
│               └── xx.ini        // Operator information definition file
└── scripts                       // Scripts used for custom operator project packaging
If you need to add another custom operator to the project directory, add the -m 1 option to the tool execution command:
python3.7.5 msopgen gen -i op_define/tf_op_2.txt -f tf -c ai_core-ascend310 -out ./output_data -m 1
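If you script the generation of several operators, the first call creates the project (-m 0, the default) and later calls append to it (-m 1). The following is a minimal Python sketch with placeholder file names:

import subprocess

op_defs = ["op_define/tf_op.txt", "op_define/tf_op_2.txt"]  # placeholders
for i, op_def in enumerate(op_defs):
    subprocess.run(["python3.7.5", "msopgen", "gen",
                    "-i", op_def, "-f", "tf",
                    "-c", "ai_core-ascend310",
                    "-out", "./output_data",
                    "-m", str(min(i, 1))],  # 0 creates the project, 1 appends
                   check=True)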
- The preceding operator code implementation file, operator adaptation plug-in implementation file, operator prototype definition file, and operator information definition file are only templates generated for the sample custom operator project, intended to reduce the code development workload and simplify the development workflow. Before building the sample custom operator project, modify the deliverables by referring to Operator Code Implementation, Operator Plug-in Implementation, Operator Prototype Definition, and Operator Information Library Definition.
- After the tool is executed, the generated CMakeLists.txt build script applies to all TensorFlow custom operator projects. No manual tweaking is required.
- You can build your custom operator project by referring to Operator Project Build Workflow to generate a custom OPP runfile.
After the build is complete, you can deploy the custom OPP runfile by referring to Operator Deployment.
Operator Project Build Workflow
- In the op/all/custom.proto file of the custom operator project, add the definition of the Caffe custom operator.
The custom.proto file is as follows:
syntax = "proto2"; package domi.caffe; message NetParameter { optional string name = 1; // LayerParameter definition. Retain the default definition. repeated LayerParameter layer = 100; // ID 100 so layers are printed last. } message LayerParameter { optional string name = 1; // Definition for model parsing. Retain the default definition. optional string type = 2; // Definition for model parsing. Retain the default definition. // Add the definition of the custom operator layer to LayerParameter. The ID must be unique in the built-in caffe.proto file and must be less than 5000. // The built-in caffe.proto file is stored in atc/include/proto/caffe.proto in the ATC installation path. optional CustomTestParameter custom_test_param = 1000; } // Add the definition of the custom layer. message CustomTestParameter { optional bool adj_x1 = 1 [default = false]; optional bool adj_x2 = 2 [default = false]; }
- You are advised to keep the custom parameter type (CustomTestParameter in the preceding example) unique, different from the types defined in the built-in caffe.proto file in the atc/include/proto directory.
- The custom.proto file in the sample code contains the definition of the custom Caffe operator. If there are other custom operators, add their definitions to this file.
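Because every field ID added to LayerParameter must stay unique and below 5000, a quick consistency check can help once several custom operators have been added. The following is a rough, hypothetical Python helper; it is regex-based and only suits simple proto files, and the caffe.proto path is the default from this section.

import re

def layer_param_field_ids(proto_path):
    """Collect the numeric field IDs declared inside message LayerParameter."""
    text = open(proto_path).read()
    body = re.search(r"message\s+LayerParameter\s*\{(.*?)\}", text, re.S).group(1)
    return {int(n) for n in re.findall(r"=\s*(\d+)\s*[;\[]", body)}

custom_ids = layer_param_field_ids("custom.proto")  # path is a placeholder
builtin_ids = layer_param_field_ids("/usr/local/Ascend/atc/include/proto/caffe.proto")

clashes = custom_ids & builtin_ids
assert not clashes, f"duplicate LayerParameter field IDs: {sorted(clashes)}"
assert all(i < 5000 for i in custom_ids), "custom field IDs must be below 5000"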
- Configure the environment variables and build settings in the build.sh script based on the actual development environment. Configure the following environment variables in the header of the build.sh script:
- ASCEND_TENSOR_COMPLIER_INCLUDE: path of the ATC header files.
- If it is not set, the default path /usr/local/Ascend/atc/include is used.
- If the actual ATC path differs from the default, uncomment this environment variable and set it to the actual path of the ATC header files, for example:
export ASCEND_TENSOR_COMPLIER_INCLUDE=/home/HwHiAiUser/Ascend/ascend-toolkit/latest/atc/include
- SYSTEM_INFO: name of the built OPP runfile. If the environment variable SYSTEM_INFO is not configured, the value is automatically derived from the OS type and architecture.
If you need to customize the OPP name, uncomment the environment variable and change it as required. For example, if the OS is CentOS and the architecture is AArch64, set the environment variable as follows:
export SYSTEM_INFO=centos_aarch64
The built OPP is then named custom_opp_centos_aarch64.run.
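For reference, the automatically derived value can be approximated in Python as follows. This is a sketch; the actual detection logic inside build.sh may differ.

import platform

arch = platform.machine().lower()    # e.g. "aarch64" or "x86_64"
os_id = platform.system().lower()    # fallback, e.g. "linux"
try:
    # On most Linux distributions, the ID field of /etc/os-release names the
    # distribution, e.g. "centos" or "ubuntu" (an assumption for illustration).
    with open("/etc/os-release") as f:
        fields = dict(line.rstrip("\n").split("=", 1) for line in f if "=" in line)
    os_id = fields.get("ID", os_id).strip('"')
except FileNotFoundError:
    pass

print(f"export SYSTEM_INFO={os_id}_{arch}")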
- Build the operator project.
Run the following command in the op/all directory of the custom operator project to build the custom operator project:
./build.sh
After successful build, an OPP runfile custom_opp_<target os>_<target architecture>.run is generated in the build_out directory.
If you need to rebuild the project, run the ./build.sh clean command to clear the build outputs.
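If the build is driven from a script, the same steps can be wrapped as follows. This is a sketch; it assumes it is run from the project directory containing build.sh.

import glob
import subprocess

subprocess.run(["./build.sh"], check=True)           # build the project
runfiles = glob.glob("build_out/custom_opp_*.run")   # locate the OPP runfile
if not runfiles:
    raise SystemExit("build finished but no OPP runfile was found")
print("Generated OPP runfile:", runfiles[0])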
Example op_define Definition Files
- TensorFlow prototype definition
The file type is TXT. Developers can obtain the prototype definitions of TensorFlow operators from the TensorFlow Open-source Community.
For example, the prototype definition of the Add operator is as follows:
REGISTER_OP("Add")
    .Input("x: T")
    .Input("y: T")
    .Output("z: T")
    .Attr(
        "T: {bfloat16, half, float, double, uint8, int8, int16, int32, int64, "
        "complex64, complex128, string}")
    .SetShapeFn(shape_inference::BroadcastBinaryOpShapeFn);
You can save the preceding definitions as a .txt file.
Note: Each .txt file can contain the prototype definition of only one operator.
- IR prototype definition of the operator adapted to the Ascend AI Processor
Modify the IR definition template file toolkit/tools/msopgen/template/Ascend_IR_Template.xlsx in the Toolkit installation path.
Edit the configuration on the "Op" sheet. You can define multiple operators on this sheet; each operator is defined with the following information:
Table 15-2 Parameters in the IR prototype definition

Column Header | Description | Required
---|---|---
Op | Operator type. | Yes
Classify | Parameter classification: input (INPUT), dynamic input (DYNAMIC_INPUT), output (OUTPUT), dynamic output (DYNAMIC_OUTPUT), or attribute (ATTR). | Yes
Name | Parameter name. | Yes
Type | Parameter type. Value range: tensor, int, bool, float, ListInt, ListFloat, and more. | Yes
TypeRange | Data types of a tensor parameter. Value range: fp16, fp32, double, int8, int16, int32, int64, uint8, uint16, uint32, uint64, bool, and more. | No
Required | Whether an input is required: TRUE or FALSE. | Yes
Doc | Parameter description. | No
Attr_Default_value | Default value of an attribute. | No
Format | Data format of a tensor parameter. Value range: ND, NHWC, NCHW, HWCN, NC1HWC0, FRACTAL_Z, and more. | No
The following is a configuration example. (In the template file, the first row of the "Op" sheet is reserved.)

Table 15-3 Example of an IR prototype definition

Op | Classify | Name | Type | TypeRange | Required | Doc | Attr_Default_value | Format
---|---|---|---|---|---|---|---|---
Reshape | INPUT | x | tensor | fp16,fp32,double,int8,int16,int32,int64,uint8,uint16,uint32,uint64,bool | TRUE | - | - | ND
 | INPUT | shape | tensor | int32,int64 | FALSE | - | - | -
 | DYNAMIC_OUTPUT | y | tensor | fp16,fp32,double,int8,int16,int32,int64,uint8,uint16,uint32,uint64,bool | FALSE | - | - | ND
 | ATTR | axis | int | - | FALSE | - | 0 | -
 | ATTR | num_axes | int | - | FALSE | - | -1 | -
ReshapeD | INPUT | x | tensor | fp16,fp32,double,int8,int16,int32,int64,uint8,uint16,uint32,uint64,bool | TRUE | - | - | ND
 | OUTPUT | y | tensor | fp16,fp32,double,int8,int16,int32,int64,uint8,uint16,uint32,uint64,bool | TRUE | - | - | ND
 | ATTR | shape | list_int | - | FALSE | - | {} | -
 | ATTR | axis | int | - | FALSE | - | 0 | -
 | ATTR | num_axes | int | - | FALSE | - | -1 | -
- Define your operator prototypes by modifying the first sheet ("Op") of the template file.
- Do not delete the first three rows or the first three columns of the "Op" sheet.
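For example, the following Python sketch uses the same xlrd library that msopgen depends on to list the operators defined on the "Op" sheet. The header lookup is an assumption made to avoid hard-coding the reserved rows and columns; verify it against your copy of the template, and note that reading .xlsx files requires an xlrd version with .xlsx support (e.g. 1.x).

import xlrd

book = xlrd.open_workbook("Ascend_IR_Template.xlsx")  # path is a placeholder
sheet = book.sheet_by_name("Op")

# Find the cell containing the "Op" column header instead of hard-coding
# the reserved offsets, since the exact layout depends on the template version.
hr, hc = next((r, c) for r in range(sheet.nrows)
              for c in range(sheet.ncols)
              if str(sheet.cell_value(r, c)).strip() == "Op")

# Collect non-empty cells below the header: each names an operator type.
ops = [str(sheet.cell_value(r, hc)).strip()
       for r in range(hr + 1, sheet.nrows)
       if str(sheet.cell_value(r, hc)).strip()]
print("Operators defined on the 'Op' sheet:", ops)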