Operator Development in Command Line
For an overview of the TBE custom operator development flow, refer to "Add operator development using TBE DSL" in the Sample Reference.
Figure 5-2 shows the workflow of developing a TBE custom operator in the CLI.
Table 5-3 describes the development workflow.
| Step | Description | See Also |
|---|---|---|
| Operator analysis | Before developing an operator, analyze it: specify the operator function, inputs, and outputs; select an operator development mode; and name the operator type and the operator implementation function. | |
| Project creation | In the CLI, you can create a custom operator project by using one of the following methods. Develop and store the operator deliverables according to the following rules.<br>NOTICE: If you need to customize multiple TBE operators and implement them in the same operator project, you are advised to store the implementation files according to the preceding rules. | |
| Operator implementation | Operator code implementation: implements the operator compute logic and schedule. | |
| | Operator plug-in implementation: required for custom operator development based on a third-party framework (such as TensorFlow). In addition to delivering the custom operator implementation code, you need to develop a plug-in that maps the TensorFlow operator to an operator adapted to the Ascend AI Processor. | |
| | Operator prototype definition: defines the constraints for running the operator on the Ascend AI Processor, mainly the mathematical meaning of the operator, by defining the operator inputs, outputs, attributes, and their value ranges, verifying arguments, and inferring the shape. The information defined in the prototype is registered with the operator prototype library of GE. During offline model conversion, GE calls the verification API of the operator prototype library to verify the operator arguments. If the verification passes, GE infers the output shape and dtype of each node by calling the inference function of the operator prototype library and allocates static memory for the result tensor. | |
| | Operator information definition: registers the operator information with the operator information library, including the input and output dtypes and formats and the input shape of the operator. During offline model conversion, FE performs basic verification based on the operator information in the operator information library, inserts conversion nodes for the operator as required, and finds the operator implementation code to build the operator binary file. | |
| Operator build and deployment | In the CLI, you can use the build file of the sample project for one-click compilation to generate a custom operator installation package. Specify the OPP path and run the installation package to deploy the custom operator. | |
| Verification by network execution | Construct a model file that contains only the single operator, and use AscendCL to load the model file to verify the function of the single operator. | |
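The prototype-definition step in the table (argument verification followed by output shape and dtype inference) can be illustrated with a minimal plain-Python sketch. This is an illustration of the logic only, not the actual GE prototype-registration API: `verify_add`, `infer_add`, and the tensor-description dictionaries are hypothetical names for a single elementwise Add operator.

```python
# Hypothetical sketch of what a prototype definition does for an
# elementwise Add operator: verify the arguments, then infer the
# output tensor description (GE would use the inferred shape/dtype
# to allocate static memory for the result tensor).

def verify_add(x_desc, y_desc):
    """Verify that the two inputs of the Add operator are compatible."""
    if x_desc["dtype"] != y_desc["dtype"]:
        raise ValueError("input dtypes must match")
    if x_desc["shape"] != y_desc["shape"]:
        raise ValueError("input shapes must match")

def infer_add(x_desc, y_desc):
    """Infer the output tensor description after verification passes."""
    verify_add(x_desc, y_desc)
    return {"shape": list(x_desc["shape"]), "dtype": x_desc["dtype"]}

out = infer_add({"shape": [2, 3], "dtype": "float16"},
                {"shape": [2, 3], "dtype": "float16"})
print(out)  # {'shape': [2, 3], 'dtype': 'float16'}
```

In the real flow, the verification and inference functions are registered with the operator prototype library so that GE can call them during offline model conversion; the sketch above only mirrors the division of work between the two functions.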