Restrictions and Limitations
The compatible TensorFlow version is 1.15. For details about the support for TensorFlow APIs, see Available TensorFlow APIs.
In the shape inference (InferShape) phase, operators do not support inference on unknown shapes.
Currently, the following data formats are supported: NCHW, NHWC, NC, HWCN, and CN.
If manual mixed precision has been implemented in the original script (for example, the cast operator is explicitly called for precision conversion), the system preferentially retains the precision of the original graph by default. That is, if an operator does not support the float32 data type, its precision is reduced to float16.
For conditional branches and loops, only tf.cond and tf.while_loop are supported.
In multi-device training, NPURunConfig does not support save_checkpoints_secs.
In multi-device training, saving the summary of only a single device is not supported.
If the value of iterations_per_loop is greater than 1, save_checkpoints_steps must be a positive integer multiple of iterations_per_loop; otherwise, checkpoints are not saved at the interval defined by save_checkpoints_steps. In addition, when iterations_per_loop is greater than 1, summaries and logs are not saved at the intervals defined by save_summary_steps and log_step_count_steps. For details, see Log and Summary Operators.
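The checkpoint-interval constraint above can be validated in plain Python. This is an illustrative sketch, not a Huawei API; the function name is hypothetical:

```python
def check_checkpoint_interval(iterations_per_loop: int,
                              save_checkpoints_steps: int) -> bool:
    """Return True if checkpoints will be saved as configured.

    Per the restriction, when iterations_per_loop > 1,
    save_checkpoints_steps must be a positive integer multiple
    of iterations_per_loop.
    """
    if iterations_per_loop <= 1:
        return True  # no constraint in this case
    return (save_checkpoints_steps > 0
            and save_checkpoints_steps % iterations_per_loop == 0)
```

For example, iterations_per_loop=10 with save_checkpoints_steps=100 satisfies the constraint, while save_checkpoints_steps=115 does not.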
You are advised to replace dropout and gelu in the original network with the corresponding AscendCL APIs to achieve better performance.
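A hypothetical migration fragment for the dropout/gelu replacement above, assuming Huawei's npu_bridge package for TensorFlow 1.15; the module paths and function names below are assumptions, so check the Ascend documentation for the exact APIs:

```python
import tensorflow as tf
from npu_bridge.estimator import npu_ops                      # assumed module path
from npu_bridge.estimator.npu_unary_ops import npu_unary_ops  # assumed module path

def model_block(x, training):
    # Instead of tf.nn.dropout(x, rate=0.1):
    if training:
        x = npu_ops.dropout(x, keep_prob=0.9)
    # Instead of a hand-written gelu composed of small TF ops:
    return npu_unary_ops.gelu(x)
```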
Only the Summary, Log, and Data operators support the string data type.
Operators do not support Inf or NaN inputs.
The restrictions on collective communication are as follows:
The graphs executed on different devices in a training job must be the same.
Only 1, 2, 4, or 8 Ascend AI Processors can be allocated per server.
Collective communication supports only the int8, int32, float16, and float32 data types.
The restrictions on data preprocessing are as follows:
Only tf.data.make_initializable_iterator can be used to offload the computation of the GetNext operator.
The value of drop_remainder of BatchDataset must be set to True. (Note: During inference, if the amount of data in the last iteration is less than the batch size, the data is padded with blank data to reach the batch size.)
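The last-batch padding behavior described in the note can be sketched in plain Python (this is an illustration of the documented behavior, not the Ascend runtime's implementation; the function name and `blank` placeholder are hypothetical):

```python
def pad_last_batch(samples, batch_size, blank):
    """Split samples into batches of batch_size, padding the final
    short batch with `blank` so every batch has the full size."""
    batches = [samples[i:i + batch_size]
               for i in range(0, len(samples), batch_size)]
    if batches and len(batches[-1]) < batch_size:
        batches[-1] = batches[-1] + [blank] * (batch_size - len(batches[-1]))
    return batches
```

With 5 samples and a batch size of 2, the last batch of 1 sample is padded with one blank element, so 3 full batches are produced; the caller must discard the results computed for the padded elements.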