Operator Development Flow
For an overview of the TBE custom operator development flow, see Add operator development using TBE DSL in the Sample Reference.
Figure 4-1 shows the flow of developing a TBE custom operator in the CLI scenario.
Table 4-1 describes the development flow.
| Step | Description | See Also |
| --- | --- | --- |
| Operator analysis | Before developing an operator, analyze it: specify the operator function, inputs, and outputs; select the operator development mode; and plan the names of the operator type and the operator implementation function. | |
| Project creation | In the CLI scenario, you are advised to modify the provided custom operator samples, which are stored in the atc/sample/op/ directory of the ATC installation package. Develop and store the operator deliverables according to the following rules.<br>Note: If you need to customize multiple TBE operators and implement them in the same operator project, you are advised to store the implementation files according to the preceding rules. | For details about the project directory structure in the CLI scenario, see Operator Project Structure. |
| Operator implementation | Operator code: implements the computation logic and schedule of the operator. | |
| | Operator plug-in: if a custom operator is developed based on a third-party framework (such as TensorFlow), after developing the operator implementation code you also need to develop a plug-in that maps the TensorFlow operator to an operator adapted to the Ascend AI Processor. | |
| | Operator prototype file: defines the constraints on operators that can run on the Ascend AI Processor, including the operator inputs, outputs, attributes, value ranges, argument verification, and shape inference. This definition is registered with the operator prototype library of GE. During offline model conversion, GE calls the verification API of the operator prototype library to verify the operator arguments; if verification passes, GE calls the inference function of the prototype library to infer the output shape and dtype of each node and allocates static memory for the result tensors. | |
| | Operator information file: registers the operator information, including the input and output dtype, format, and input shape information, with the operator information library. During offline model conversion, FE performs basic verification based on the operator information library, inserts conversion nodes for the operator as required, and locates the operator implementation code to build the operator binary file. | |
| Operator building and deployment | In the CLI scenario, you can use the compilation file of the sample project for one-click compilation to generate a custom operator installation package. Specify the opp path and execute the installation package to deploy the custom operator. | |
| Operator verification on a network | Write a network test case that constructs a TensorFlow network containing the operator to be verified, then run the test case to check whether the operator produces correct results on the network. | |
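Operator information files are typically ini-style configurations. The fragment below is a hypothetical entry for a custom Add-style operator; the key names (input0.*, opFile.value, opInterface.value) follow the common TBE sample convention, but the samples shipped in atc/sample/op/ are the authoritative reference for the exact schema:

```ini
; Hypothetical operator information entry (AddCustom is an assumed name).
; Each inputN/outputN line lists the supported dtype/format combinations
; that FE checks during basic verification.
[AddCustom]
input0.name=x1
input0.dtype=float16,float,int32
input0.format=ND,ND,ND
input0.paramType=required
input1.name=x2
input1.dtype=float16,float,int32
input1.format=ND,ND,ND
input1.paramType=required
output0.name=y
output0.dtype=float16,float,int32
output0.format=ND,ND,ND
output0.paramType=required
; Name of the Python file and entry function implementing the operator.
opFile.value=add_custom
opInterface.value=add_custom
```

FE matches the dtype/format of the graph node against these entries, inserts conversion nodes when the node's format is not supported directly, and uses opFile/opInterface to locate the implementation code for building the operator binary.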
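The shape and dtype inference that the operator prototype file performs can be illustrated with a minimal, framework-independent sketch. The function below is hypothetical (the real prototype file registers a verification and an inference function with GE rather than exposing a free function), and it assumes an elementwise Add-style operator with NumPy-style broadcasting:

```python
# Minimal sketch of the verification + shape/dtype inference that a
# prototype file performs for an elementwise Add-style operator.
# infer_add_output is a hypothetical helper, not a GE API.

def infer_add_output(shape_x, dtype_x, shape_y, dtype_y):
    """Verify arguments, then infer the output shape and dtype."""
    # Argument verification: both inputs must share one dtype.
    if dtype_x != dtype_y:
        raise ValueError(f"dtype mismatch: {dtype_x} vs {dtype_y}")

    # Broadcasting, applied from the trailing dimensions.
    rx, ry = list(reversed(shape_x)), list(reversed(shape_y))
    out = []
    for i in range(max(len(rx), len(ry))):
        dx = rx[i] if i < len(rx) else 1
        dy = ry[i] if i < len(ry) else 1
        if dx != dy and dx != 1 and dy != 1:
            raise ValueError(f"incompatible dims {dx} and {dy}")
        out.append(max(dx, dy))
    return list(reversed(out)), dtype_x

print(infer_add_output([16, 1, 32], "float16", [8, 32], "float16"))
# -> ([16, 8, 32], 'float16')
```

GE uses the inferred shape and dtype to size the static memory allocated for each result tensor before the model runs.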