FusionStorage V100R006C20 Block Storage Service Software Installation Guide 07


Installing a CVM

Scenarios

When connecting FusionStorage Block with Microsoft Hyper-V, create a control VM (CVM) to deploy FSA. If the server is used only as a storage node, deploy FSA in the operating system of the server.

This section describes how to install the FusionStorage Block control VM (CVM) on a Microsoft Hyper-V compute node. In separated deployment mode, the VBS process is deployed on the CVM and the OSD process is deployed on the storage node.

The following describes how to create and install a CVM in each compute node.

Prerequisites

Conditions

The requirements listed in System Requirements have been fulfilled.

Data

No data is required for performing this operation.

Software

FusionStorage Block V100R006C20SPC200_tool.zip

Procedure

    Create a CVM.

    1. Log in to the compute node where the CVM is to be created, search for and open Hyper-V Manager, right-click the compute node, choose New > Virtual Machine..., and create the CVM according to the specifications in Table 7-6.

      Table 7-6  CVM specifications

      • Operating system (OS): SUSE Linux Enterprise Server 11 SP3 (64-bit)
      • CPUs:
        • Separated deployment mode: 4 CPUs per CVM
        • Converged deployment mode: 8 CPUs per CVM
      • Memory (GB): Memory occupied by each server = Memory occupied by the operating system + Memory occupied by the VBS process
        • For the memory occupied by the operating system, see the memory requirements in the product documentation of the corresponding products.
        • For the memory occupied by the VBS process, see the memory capacity of compute nodes in Memory in System Requirements.
      • NIC (network adapter):
        • First network adapter: select the management network plane.
        • Second network adapter: select the storage network plane.
        • Third network adapter: select the iSCSI network plane.
      • Disk (GB): one 100 GB disk that uses local storage on the host
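      For example (illustrative values only, not taken from the product documentation): if the operating system occupies 16 GB and the VBS process occupies 8 GB, the CVM requires at least 16 GB + 8 GB = 24 GB of memory. Use the actual values from System Requirements in your deployment.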

      Retain the default specifications for all other options.

      Keep the VM powered off after it is created, because additional VM parameters must still be configured.

    2. If multiple compute nodes are deployed, create a CVM on each node.

    Configure the CVM.

    1. Right-click the CVM and select Settings... to configure the CVM parameters.

      In Processor, configure the CPU according to the CVM specifications, as shown in Figure 7-4.

      Figure 7-4  CPU configuration

    2. Configure the parameters for all the other CVMs. For details, see 3.

    Install an OS.

    1. Mount the operating system image file as described in Installing an OS on FSM, and install the operating system as described in Installing an OS on Storage Nodes based on the site requirements.

      Ensure that the available space in the following directories after the OS is installed meets the running requirements of the FusionStorage Block software.

      • /opt: 1 GB of available space. This space is required for the FusionStorage Block software to run properly.
      • /tmp: 1 GB of available space. The available space must be greater than 1 GB before the software is installed; after the installation, you can clear this directory.
      • /var/log: 2 GB of available space. You are advised to create an independent partition for storing logs so that a large number of logs does not affect OS running.

      Table 7-7 lists the minimum space requirements for system partitions. If the system disk space is sufficient, you are advised to expand each partition as needed.

      Table 7-7  Server OS partitions (minimum and recommended partition sizes)

      • / (system root directory): minimum 20 GB, recommended 20 GB
      • swap (swap partition): minimum 20 GB, recommended 20 GB
      • /usr (system program directory): minimum 5 GB, recommended 20 GB
      • /opt (third-party software directory): minimum 5 GB, recommended 40 GB
      • /tmp (directory for temporary files generated by users or during program running): minimum 5 GB, recommended 40 GB
      • /var (directory for data changed during system running): minimum 5 GB, recommended 5 GB
      • /var/log (log partition): minimum 5 GB, recommended 60 GB
        NOTICE: Do not save other files in this partition. Otherwise, log space will be used up, new logs cannot be saved, and fault location will be affected.
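      After the OS is installed, you can quickly confirm that the partition sizes and available space meet these requirements. The following is a minimal check (the listed mount points follow the partition plan above; adjust them to your actual layout):

      df -h / /usr /opt /tmp /var /var/log
      swapon -s

      The df output shows the size and available space of each partition, and swapon -s shows the swap partition size.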

    Configure network information.

    NOTE:
    The following operations describe how to configure network information for VMs running the SUSE Linux OS.

    1. Right-click the CVM, select Connect..., and log in to the CVM as user root.
    2. Run the following command to go to the directory that contains the NIC configuration files:

      cd /etc/sysconfig/network

    3. Run the following command to list the NIC configuration files in the directory and make a note of the file names:

      ll

      Information similar to the following is displayed:

      ifcfg-eth0

      ifcfg-eth1

    4. Modify the NIC configuration files based on the network planes configured for the CVM network adapters.

      For example, if the management plane uses the first network adapter, the NIC configuration file name is ifcfg-eth0. If the storage plane uses the second network adapter, the NIC configuration file name is ifcfg-eth1. If the iSCSI plane uses the third network adapter, the NIC configuration file name is ifcfg-eth2.

      NOTE:
      In the /etc/sysconfig/network directory, the third network adapter (the ifcfg-eth2 file in the preceding example) may not exist by default. Therefore, you are advised to create the file and configure the network information.
      Examples of the NIC configuration files on each network plane are as follows:
      • For the management plane (ifcfg-eth0):
        BOOTPROTO="static"
        DEVICE="eth0"
        IPADDR="192.168.40.30"
        NETMASK="255.255.255.0"
        STARTMODE="onboot"
      • For the storage plane (ifcfg-eth1):
        BOOTPROTO="static"
        DEVICE="eth1"
        IPADDR="192.168.70.30"
        NETMASK="255.255.255.0"
        STARTMODE="onboot"
      • For the iSCSI plane (ifcfg-eth2):
        BOOTPROTO="static"
        DEVICE="eth2"
        IPADDR="192.168.80.30"
        NETMASK="255.255.255.0"
        STARTMODE="onboot"
      NOTE:
      The preceding IP addresses are examples only. Set IP addresses based on the data plan.
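      If the ifcfg-eth2 file needs to be created manually, a simple approach is to copy an existing NIC configuration file and then edit it. The following is a minimal sketch that follows the example file names and addresses above:

      cd /etc/sysconfig/network
      cp ifcfg-eth1 ifcfg-eth2
      vi ifcfg-eth2

      In the copied file, change DEVICE to "eth2" and IPADDR to the iSCSI plane address from your data plan.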

    5. Run the following command to edit the route configuration file:

      vi /etc/sysconfig/network/routes

      NOTE:
      If the routes file does not exist in the OS, it is created when you run the command. Continue with the subsequent operations.

    6. Add the default gateway information of the management plane to the configuration file.

      For example, add the following line to the file:

      default 192.168.40.1 - -

    7. Save the configuration and exit the vi editor.
    8. Run the following command to restart the network service to make the configuration take effect:

      service network restart

    9. Configure the network information on all the other CVM nodes. For details, see 6 to 13.
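    Optionally, verify the network configuration on each CVM after the network service restarts. The following is a minimal check based on the example addresses above (adjust the interface names and gateway to your data plan):

      ip addr show eth0
      ip addr show eth1
      ping -c 3 192.168.40.1

    The ip addr output should show the configured IP addresses, and the ping command should reach the management plane gateway.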

    Install OS dependency packages.

    The FusionStorage Agent (FSA) can be installed on a server only after the required OS dependency packages have been installed on it.

    1. After the OS is installed, configure firewall rules for the OS based on the FusionStorage Communication Matrix.

      Log in to the server OS using the server management IP address. Perform the following operations to disable the firewall and prevent it from starting automatically upon OS startup. If the firewall is not disabled, communication between storage nodes will fail.

      Commands for disabling firewalls are as follows:

      • OSs earlier than Red Hat Linux 6.9 or CentOS Linux 6.9:

        Stopping the firewall program: service iptables stop

        Disabling automatic firewall startup upon OS startup: chkconfig iptables off

      • Red Hat Linux 7.1 to 7.5, CentOS Linux 7.1 to 7.5, Oracle Linux, or EulerOS:

        Stopping the firewall program: systemctl stop firewalld

        Disabling automatic firewall startup upon OS startup: systemctl disable firewalld

      • SUSE Linux OSs:

        Stopping the firewall program:

        /etc/init.d/SuSEfirewall2_setup stop

        /etc/init.d/SuSEfirewall2_init stop

        Disabling automatic firewall startup upon OS startup:

        chkconfig SuSEfirewall2_setup off

        chkconfig SuSEfirewall2_init off

      For detailed operation guidelines, visit the official website of the OS provider.
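      The commands above can also be combined into a single script and run as user root on each node. The following is a minimal sketch based on the commands listed above (the OS detection logic is an assumption; adapt it to the OS versions actually deployed):

        if [ -x /etc/init.d/SuSEfirewall2_setup ]; then
            # SUSE Linux OSs
            /etc/init.d/SuSEfirewall2_setup stop
            /etc/init.d/SuSEfirewall2_init stop
            chkconfig SuSEfirewall2_setup off
            chkconfig SuSEfirewall2_init off
        elif command -v systemctl >/dev/null 2>&1; then
            # Red Hat Linux 7.1 to 7.5, CentOS Linux 7.1 to 7.5, Oracle Linux, or EulerOS
            systemctl stop firewalld
            systemctl disable firewalld
        else
            # OSs earlier than Red Hat Linux 6.9 or CentOS Linux 6.9
            service iptables stop
            chkconfig iptables off
        fi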

    2. Decompress the FusionStorage Block V100R006C20SPC200_tool.zip software package.

      Obtain one of the following installation scripts for the OS dependency packages from the install_lib folder based on the OS type:

      • Red Hat Linux OSs: install_lib_for_rhel.sh
      • SUSE Linux OSs: install_lib_for_suse.sh
      • CentOS Linux OSs: install_lib_for_centos.sh
      • Oracle Linux OSs: install_lib_for_oel.sh

    3. (Optional) Modify the installation script of the OS dependency packages to adapt it to the installation environment.

      Perform this step if you have already configured the software repository onsite and do not need to obtain the dependency package software by mounting the OS image file.

      Modify the installation script as follows:

      • SUSE Linux OSs: Edit the install_lib_for_suse.sh file as follows:

        • Change the REPO_NAME value to the name of the in-use software repository. You can query the software repository name by running the zypper lr command in the OS.
        • Change the ZYPPER_REPO_IS_EXIST value to 1.
      • Other OSs: Change the YUM_REPO_IS_EXIST value in the install_lib_for_XXX.sh file to 1.
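      For example, after the modification, the relevant variables in install_lib_for_suse.sh could look as follows (the exact file layout and the repository name are illustrative; use the name returned by zypper lr):

        REPO_NAME="SLES11-SP3-Pool"
        ZYPPER_REPO_IS_EXIST=1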

    4. Upload the obtained installation script to a directory, for example, /tmp, on each server.
    5. On the remote control page, mount the OS image file.

      This step is not required if you have configured the software repository onsite.

    6. Use PuTTY to log in to the first server.

      Ensure that the management IP address and username root are used to establish the connection.

      If the public and private keys are used to authenticate the login, perform the operations based on Using PuTTY to Log In to a Node in Key Pair Authentication Mode.

    7. Switch to the directory containing the installation script and run the following command to install the dependency packages:

      sh install_lib_for_xxx.sh

      In this command, install_lib_for_xxx.sh specifies the installation script name.

      For example, sh install_lib_for_suse.sh.

      The installation takes about 20 minutes. You can install the dependency packages on multiple servers at the same time. The time required varies depending on the hardware configuration and whether a software repository is used. If the installation completes quickly and no error message is reported, the installation is successful.
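      To quickly confirm that the script finished without errors, you can check its exit status immediately after it completes. The following is a minimal sketch using the SUSE script as an example (whether the script sets a meaningful exit status is an assumption, so also check the on-screen output for error messages):

      sh install_lib_for_suse.sh
      echo $?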

      NOTE:

      If the server does not support the dependency package installation by mounting the image file, perform the following operations to install the dependency packages:

      1. Upload the local operating system image file to the server.
      2. Run the sh install_lib_for_xxx.sh image_file_path command to install the dependency packages, where image_file_path is the path to the uploaded image file.

        In the preceding command, install_lib_for_xxx.sh specifies the name of the installation script.

        Example command: sh install_lib_for_rhel.sh /tmp/CentOS-7-x86_64-DVD-1611.iso

      3. After the dependency packages are installed, the servers are automatically restarted.

    8. Install the dependency packages for all servers. For details, see 20 to 21.

    Set VMs to automatically start.

    1. Set the management VMs to automatically start with the host.

      Management VMs include:

      • FSM VMs
      • CVMs

      The CVM must be placed first in the automatic startup sequence of VMs to improve storage service reliability.

      Right-click the VM to be configured, and select Settings... to enable the VM to start automatically when the host starts, as shown in Figure 7-5.

      Figure 7-5  VM automatic startup
