Decomposition Example
Procedure
- Go to the amct_tf/sample/tensor_decompose directory and execute the script for decomposing the original model:
python3.7.5 decompose_sample.py --meta_path META_PATH --ckpt_path CKPT_PATH --save_path SAVE_PATH
Table 4-28 describes the command-line options.
Table 4-28 Command-line options
--meta_path META_PATH
(Required) Path of the TensorFlow model file (.meta).
--ckpt_path CKPT_PATH
(Required) Path of the TensorFlow model weight files, including the .data-0000X-of-0000X and .index files. Specify the checkpoint prefix without the file name extension.
--save_path SAVE_PATH
(Required) Save path of the result files after tensor decomposition, which can be relative or absolute. If the specified directory does not exist, it is created automatically.
The path must end with the name prefix used for the result model files, for example, tmp/model_decomposition.
An example is as follows:
python3.7.5 decompose_sample.py --meta_path checkpoints/model.ckpt-200.meta --ckpt_path checkpoints/model.ckpt-200 --save_path tmp/model_decomposition
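For reference, the following is a minimal sketch of what such a decomposition wrapper can look like. The import path and the auto_decomposition(meta_path, ckpt_path, save_path) signature shown here are assumptions for illustration only; the shipped decompose_sample.py is the authoritative version.
import argparse
# ASSUMPTION: the import path below is illustrative; check decompose_sample.py for the real one.
from amct_tensorflow.tensor_decompose import auto_decomposition

def main():
    parser = argparse.ArgumentParser(description='Tensor decomposition sample (sketch)')
    parser.add_argument('--meta_path', required=True, help='Path of the .meta model file')
    parser.add_argument('--ckpt_path', required=True, help='Checkpoint prefix without file name extension')
    parser.add_argument('--save_path', required=True, help='Save path ending with the result model name prefix')
    args = parser.parse_args()
    # Decompose the decomposition-capable convolutions and write the result model,
    # weights, and .pkl graph-modification record to the save path.
    auto_decomposition(args.meta_path, args.ckpt_path, args.save_path)

if __name__ == '__main__':
    main()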
During the decomposition, the names of the decomposition-capable operators and the names of the decomposed operators are recorded in the log. The following log messages are examples only.
[AMCT]:[AMCT]: Processing conv2d_1/Conv2D
[AMCT]:[AMCT]: Decompose conv2d_1/Conv2D -> ['conv2d_1/Conv2D/decom_first/decom_first', 'conv2d_1/Conv2D/decom_core/decom_core', 'conv2d_1/Conv2D/decom_last/decom_last']
...
If messages similar to the following are displayed, the decomposition is successful:
auto_decomposition complete!
- After the decomposition is complete, the result model is generated in the path specified by the --save_path option.
-rw-r--r-- 1 amct amct 95 Jul 28 04:49 checkpoint //Checkpoint when the result model file is generated
-rw-r--r-- 1 amct amct 10846856 Jul 28 04:49 model_decomposition.data-00000-of-00001 //Weight file of the result model
-rw-r--r-- 1 amct amct 967 Jul 28 04:49 model_decomposition.index //Weight file index of the result model
-rw-r--r-- 1 amct amct 315800 Jul 28 04:49 model_decomposition.meta //Result model file
-rw-r--r-- 1 amct amct 517 Jul 28 04:49 model_decomposition.pkl //Modifications made to the graph structure, used by the decompose_graph API
Fine-tuning
In normal cases, the accuracy of a decomposed model is lower than that of the original model. Therefore, fine-tuning is introduced to improve the accuracy of the decomposed model. Start fine-tuning with a learning rate of about 0.1 times the original one and decrease it from there. The number of epochs required varies with the model: the more convolutional layers are decomposed, the more epochs are needed. Fine-tuning usually improves accuracy, but the accuracy may also stay the same or even drop.
- Prerequisites
In this section, the decompose_graph API call is inserted into the training code to fine-tune the decomposed model. Ensure that the model and weights were generated by the corresponding training code. The sample/tensor_decompose directory provides two copies of TensorFlow training code, one using the Session API and the other using the Estimator API. Select the one that best suits your needs.
- Example
Insert the decompose_graph API call into the training code after the model is built and before the optimizer is built. After this call, load the weights of the decomposed model for fine-tuning. Note that the CKPT file generated after decomposition may contain optimizer parameters of the model before decomposition, which do not match the decomposed model; therefore, load only the weights of the decomposed model, not the optimizer parameters. For details, see finetune_sample_session.py and finetune_sample_estimator.py.
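The following Session-style sketch illustrates this ordering. It is for illustration only: the import path and the decompose_graph argument (here, the .pkl file produced by auto_decomposition) are assumptions, and build_model() is a toy stand-in for your own network; the shipped finetune_sample_session.py shows the exact usage.
import tensorflow as tf
# ASSUMPTION: the import path and argument of decompose_graph are illustrative placeholders.
from amct_tensorflow.tensor_decompose import decompose_graph

def build_model():
    # Toy stand-in for the user's own network-building code.
    inputs = tf.placeholder(tf.float32, [None, 28, 28, 1], name='inputs')
    labels = tf.placeholder(tf.int64, [None], name='labels')
    conv = tf.layers.conv2d(inputs, 16, 3, activation=tf.nn.relu, name='conv2d_1')
    logits = tf.layers.dense(tf.layers.flatten(conv), 10, name='logits')
    loss = tf.losses.sparse_softmax_cross_entropy(labels, logits)
    return inputs, labels, loss

inputs, labels, loss = build_model()                    # 1. Build the model.
decompose_graph('tmp/model_decomposition.pkl')          # 2. Apply the recorded graph modifications
                                                        #    before the optimizer is built.
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4)  # 3. Build the optimizer with a learning
train_op = optimizer.minimize(loss)                     #    rate of about 0.1x the original one.

# 4. Restore only the model weights of the decomposed model; optimizer parameters in the
#    decomposition checkpoint may belong to the original model and are not loaded.
weight_saver = tf.train.Saver(var_list=tf.trainable_variables())
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    weight_saver.restore(sess, 'tmp/model_decomposition')
    # ... run the fine-tuning loop with train_op ...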
- Go to the amct_tf/sample/tensor_decompose directory and run the following command to fine-tune the model:
python3.7.5 finetune_sample_session.py --data_path DATA_PATH --save_path SAVE_PATH
Alternatively,
python3.7.5 finetune_sample_estimator.py --data_path DATA_PATH --save_path SAVE_PATH
Table 4-29 describes the command-line options.
Table 4-29 Command-line options
--data_path DATA_PATH
(Required) Directory of the MNIST dataset. For details, see Prerequisites.
--save_path SAVE_PATH
(Required) Path of the result files generated by the auto_decomposition call, that is, the save path used during decomposition.
An example is as follows:
python3.7.5 finetune_sample_session.py --data_path data/mnist --save_path tmp/model_decomposition
- If the model file and weight files in Prerequisites were generated by train_sample_session.py, use the finetune_sample_session.py fine-tuning script.
- If the model file and weight files in Prerequisites were generated by train_sample_estimator.py, use the finetune_sample_estimator.py fine-tuning script.
If messages similar to the following are displayed, the execution is successful:
Valid Accuracy: 0.9838 //Accuracy on the MNIST dataset
- After fine-tuning is complete, the finetuned_ckpt result directory is automatically generated in the sample/tensor_decompose directory, storing the model file and weight files of the fine-tuned model. The following is an example.
-rw-r--r-- 1 amct amct 128 Aug 1 04:14 checkpoint //Checkpoint when the fine-tuned model file is generated
-rw-r--r-- 1 amct amct 10093192 Aug 1 04:14 model.ckpt-100.data-00000-of-00001 //Weight file of the fine-tuned model
-rw-r--r-- 1 amct amct 1144 Aug 1 04:14 model.ckpt-100.index //Weight file index of the fine-tuned model
-rw-r--r-- 1 amct amct 122783 Aug 1 04:14 model.ckpt-100.meta //Fine-tuned model file
-rw-r--r-- 1 amct amct 3171658 Aug 1 04:14 model.pb //.pb file of the fine-tuned model
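If you need to regenerate such a .pb file yourself, the following sketch shows one common way to freeze a fine-tuned checkpoint with standard TensorFlow 1.x APIs. The checkpoint prefix and the output node name are placeholders; the sample scripts themselves produce model.pb as shown above.
import tensorflow as tf

# Placeholder paths and node name for illustration; adapt them to your own network.
ckpt_prefix = 'finetuned_ckpt/model.ckpt-100'
output_nodes = ['logits/BiasAdd']

with tf.Session() as sess:
    saver = tf.train.import_meta_graph(ckpt_prefix + '.meta')    # rebuild the fine-tuned graph
    saver.restore(sess, ckpt_prefix)                              # load the fine-tuned weights
    frozen_graph = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_nodes)                       # fold variables into constants
with tf.gfile.GFile('model.pb', 'wb') as f:
    f.write(frozen_graph.SerializeToString())                     # write the frozen .pb file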
For details about the files generated after networks from more open-source frameworks are decomposed, see Tensor Decomposition Specification Reference of Open-Source Networks.
- For details about how to quantize a model in PB format, see Calibration-based Quantization.