Creating a Container Image
Prerequisites
- In the container scenario, you need to install Docker.
- Obtain the offline inference engine package and the service inference program package by referring to Table 6-1. The user environment must be able to connect to the network to pull images. If the network is not connected, see Configuring a System Network Proxy.
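Before you continue, you can confirm that Docker is available on the edge station by running the following command (any Docker release that can build images is sufficient):

docker --version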
Table 6-1 Required software

| Software Package | Description | How to Obtain |
| --- | --- | --- |
| A500-3000-nnrt_{version}_linux-aarch64.run | Offline inference engine package. {version} indicates the software package version. | |
| dockerfile | Required for creating an image. | Prepared by users |
| Service inference program package | Collection of service inference programs. The .tar and .tgz formats are supported. The compressed package format must be supported by the decompression program in the container, and the command for decompressing the package in install.sh must match the actual format. | Prepared by users |
| install.sh | Installation script of the service inference program. | Prepared by users |
| run.sh | Script for running the service inference program. | Prepared by users |

For details about how to develop service programs, see the software development guide that matches the firmware version in the operating environment:
- If the firmware version is 2.2.XXX, see the Atlas 500 AI Edge Station 1.0.0 Application Software Development Guide (Models 3000, 3010).
- If the firmware version is 20.0.X, see the Atlas Intelligent Edge Solution V100R020C00 Application Software Development Guide.
- If the firmware version is 20.1.X, see the CANN V100R020C10 Application Software Development Guide.
Procedure
1. Upload the following software packages to the same directory (for example, /home/test) on the edge station:
   - A500-3000-nnrt_{version}_linux-aarch64.run
   - Service inference program package
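For example, the packages can be copied from a local Linux host with scp (the target address is illustrative; replace {version} and the service package name, dist.tar here, with the actual ones):

scp A500-3000-nnrt_{version}_linux-aarch64.run dist.tar root@<edge-station-ip>:/home/test/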
2. Perform the following steps to create a Dockerfile:
   a. Log in to the edge station as the root user and run the id HwHiAiUser command to query and record the UID and GID of the HwHiAiUser user on the host.
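The command and a typical output are shown below; the UID and GID values of 1000 are only illustrative, so record the values actually returned on your host:

id HwHiAiUser
uid=1000(HwHiAiUser) gid=1000(HwHiAiUser) groups=1000(HwHiAiUser)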
   b. Go to the software package upload directory in 1 and run the following command to create a Dockerfile (for example, Dockerfile):
vi Dockerfile
   c. Write the following content and run the :wq command to save the file. The Ubuntu Arm OS is used as an example.
# OS and version number. Change them based on the site requirements.
FROM ubuntu:18.04
# Set the parameters of the offline inference engine package.
ARG NNRT_PKG
# Set environment variables.
ARG ASCEND_BASE=/usr/local/Ascend
ENV LD_LIBRARY_PATH=\
$LD_LIBRARY_PATH:\
$ASCEND_BASE/nnrt/latest/acllib/lib64:\
/home/data/miniD/driver/lib64:\
/home/data/miniD/driver/add-ons
# Set the working directory of the started container.
WORKDIR /root
# Copy the offline inference engine package.
COPY $NNRT_PKG .
# Install the offline inference engine package.
RUN umask 0022 && \
    groupadd -g gid HwHiAiUser && useradd -g HwHiAiUser -d /home/HwHiAiUser -m HwHiAiUser && usermod -u uid HwHiAiUser &&\
    chmod +x ${NNRT_PKG} &&\
    ./${NNRT_PKG} --quiet --install &&\
    rm ${NNRT_PKG}
# Copy the service inference program package, installation script, and running script.
ARG DIST_PKG
COPY $DIST_PKG .
COPY install.sh .
COPY run.sh /usr/local/bin/
# Run the installation script.
RUN chmod +x /usr/local/bin/run.sh && \
    sh install.sh && \
    rm $DIST_PKG && \
    rm install.sh
# Program that is run by default when the container is started.
CMD run.sh
Note the following line in the Dockerfile:
groupadd -g gid HwHiAiUser && useradd -g HwHiAiUser -d /home/HwHiAiUser -m HwHiAiUser && usermod -u uid HwHiAiUser &&\
This line creates the HwHiAiUser user in the container. In it, gid and uid are placeholders for the GID and UID of the HwHiAiUser user on the host; replace them with the values queried in 2.a. The UID and GID of the HwHiAiUser user in the container must be the same as those on the host.
In this document, the driver running user is HwHiAiUser. If you specify another user as the driver running user, change the user name to the actual one.
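For example, if the query in 2.a returned UID 1000 and GID 1000 (illustrative values), the line would read:

groupadd -g 1000 HwHiAiUser && useradd -g HwHiAiUser -d /home/HwHiAiUser -m HwHiAiUser && usermod -u 1000 HwHiAiUser &&\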
   d. After creating the Dockerfile, run the following command to change the permission on the Dockerfile:
chmod 600 Dockerfile
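To verify the change, you can list the file; the mode should be -rw------- so that only the root user can read and modify the Dockerfile:

ls -l Dockerfile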
3. The procedure for preparing the install.sh and run.sh scripts is the same as that for preparing the Dockerfile. See Compilation Sample for the file content.
4. Go to the directory where the software package is stored and run the following command to create a container image:
docker build -t image-name --build-arg NNRT_PKG=nnrt-name --build-arg DIST_PKG=distpackage-name .
Do not omit the period (.) at the end of the command. Table 6-2 describes the parameters in the command.
Table 6-2 Command parameter description

| Parameter | Description |
| --- | --- |
| image-name | Image name and tag. You can set them as required. |
| --build-arg | Specifies parameters in the Dockerfile. |
| NNRT_PKG | nnrt-name specifies the name of the offline inference engine package. Do not omit the file name extension. Replace it with the actual name. |
| DIST_PKG | distpackage-name specifies the name of the compressed package of the service inference program. Do not omit the file name extension. Replace it with the actual name. |
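For example, assuming the image is to be named workload-image:v1.0 (matching the docker images output shown below) and the package file names used in this section, the command could look as follows; replace {version} and both package names with the actual ones:

docker build -t workload-image:v1.0 \
    --build-arg NNRT_PKG=A500-3000-nnrt_{version}_linux-aarch64.run \
    --build-arg DIST_PKG=dist.tar .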
If "Successfully built xxx" is displayed, the image is successfully created.
5. After the image is created, run the following command to view the image information:
docker images
Example:
REPOSITORY       TAG    IMAGE ID       CREATED             SIZE
workload-image   v1.0   1372d2961ed2   About an hour ago   249MB
Compilation Sample
Compilation example of install.sh
#!/bin/bash
# Go to the container working directory.
cd /root
# Decompress the service inference program package based on the package format.
tar xf dist.tar
# Create log-related directories and change their owner.
mkdir -p /usr/slog
mkdir -p /var/log/npu/slog/slogd
chown -Rf HwHiAiUser:HwHiAiUser /usr/slog
chown -Rf HwHiAiUser:HwHiAiUser /var/log/npu/slog
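If the service inference program package is a .tgz archive instead of a .tar archive, adjust the decompression command accordingly, for example:

tar xzf dist.tgz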
Compilation example of run.sh
#!/bin/bash
# Start the slogd daemon process.
/home/data/miniD/driver/tools/slogd &
ps -ef | grep -v grep | grep "tools/slogd"
# Access the directory where the executable file of the service inference program is located.
cd /root/dist
# Run the executable file.
./main
If the OS in the container is Ubuntu ARM 18.04, add the following content to install.sh:
# Create a soft link for the slogd daemon process.
mkdir /lib64
cd /lib64
ln -sf /lib/ld-linux-aarch64.so.1 ld-linux-aarch64.so.1