FusionStorage 8.0.0 Software Installation Guide 02

Installing a CVM

Scenarios

  • Perform this operation only when FusionStorage interconnects with VMware, that is, when:
    • Storage nodes and compute nodes are deployed together.
    • VBSs are deployed on the CVMs of compute nodes.
  • This section describes how to create a VM on an ESXi node as a CVM node, install the operating system, configure the networks (storage network and management network), and set the startup mode of the CVM node. You can install the FusionStorage software on DeviceManager Client later.

Procedure

  1. Configure virtual switches on the ESXi node. For details about how to configure virtual switches, see the official VMware documentation; this section describes only the operation sequence and requirements. A command-line sketch is provided after the following sub-steps.

    1. Add a 10GE virtual switch.
    2. Create a storage network port group and an iSCSI network port group on the 10GE virtual switch.
      NOTE:

      By default, the system provides the VM Network port group and the Management Network port group.

    3. Create an iSCSI port on the 10GE virtual switch and configure a service IP address for the iSCSI port to communicate with the current ESXi node.
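
    As a minimal command-line sketch of the above, the following esxcli commands can be run in the ESXi Shell; the switch, uplink, port group, interface, and IP address values (vSwitch1, vmnic2, vmnic3, storage_pg, iscsi_pg, vmk1, 192.168.20.11) are placeholders to be replaced with site values, and the same operations can be performed in the vSphere Client instead.

      # Add a 10GE virtual switch and attach the 10GE uplinks.
      esxcli network vswitch standard add --vswitch-name=vSwitch1
      esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
      esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3
      # Create the storage network and iSCSI network port groups on the 10GE switch.
      esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=storage_pg
      esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iscsi_pg
      # Create a VMkernel (iSCSI) port on the iSCSI port group and assign it a service IP address.
      esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iscsi_pg
      esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.20.11 --netmask=255.255.255.0 --type=static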

  2. Create a CVM. For details about partition requirements and how to create a CVM, see Preparing Management VMs. Table 2-21 describes the configuration requirements, and a command-line sketch follows the table.

    Table 2-21 CVM specifications

    Name                Specifications
    Operating system    See Compatibility.
    CPU                 See CPU.
    Memory              See Memory.
    Network             • NIC 1: Select the default management network port group.
                        • NIC 2: Select the storage network port group created in 1.
                        • NIC 3: Select the iSCSI network port group created in 1.
    Disk                One 100 GB disk, used for local storage.
    Other parameters    Retain the default specifications.
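
    If you prefer the command line to the vSphere Client, the following govc (the vSphere CLI from the VMware govmomi project) commands are a minimal sketch of creating a CVM that matches Table 2-21. The VM name cvm01 and the port group names storage_pg and iscsi_pg are placeholders, and the CPU and memory values shown must be replaced with the values from the CPU and Memory sections.

      # Create the CVM powered off, with NIC 1 on the default management port group
      # ("VM Network" here; adjust to the site) and one 100 GB disk for local storage.
      # -c (vCPUs) and -m (memory in MB) are placeholders; use the values from the CPU and Memory sections.
      govc vm.create -on=false -c 8 -m 16384 -disk 100GB -net "VM Network" cvm01
      # Add NIC 2 and NIC 3 on the port groups created in 1.
      govc vm.network.add -vm cvm01 -net storage_pg
      govc vm.network.add -vm cvm01 -net iscsi_pg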

  3. Configure the CVM network.

    1. Log in to the CVM as user root.
    2. Run the ifconfig command to check the NIC names and the networks to which the NICs belong.

      For example, if the MAC address of eth0 matches that of NIC 1, eth0 belongs to the management network.

    3. Run the following command to access the directory where the NIC configuration file resides:
      • Red Hat, Oracle, CentOS, and EulerOS: cd /etc/sysconfig/network-scripts
      • SUSE: cd /etc/sysconfig/network
    4. Run the ll command to view and record the names of all NIC configuration files.
      NOTE:

      If a NIC configuration file does not exist in the current path, run the vi configuration file name command (for example, vi ifcfg-eth0) to create one.

    5. Edit the configuration file, as shown in Table 2-22.
      NOTE:

      In the following example, the management network port is named eth0 and its configuration file is ifcfg-eth0. Replace these names based on the site requirements. A filled-in example is provided after these sub-steps.

      Table 2-22 Content of the configuration file

      Management network
      • SUSE:
        BOOTPROTO="static"
        DEVICE="eth0"
        IPADDR="Management network IP address"
        NETMASK="Management network subnet mask"
        STARTMODE="onboot"
      • Red Hat, EulerOS, and CentOS:
        TYPE=Ethernet
        BOOTPROTO=static
        NAME=eth0
        DEVICE=eth0
        IPADDR=Management network IP address
        NETMASK=Management network subnet mask
        GATEWAY=Management network gateway
        ONBOOT=yes

      Storage network
      • SUSE:
        BOOTPROTO="static"
        DEVICE="eth1"
        IPADDR="Storage network IP address"
        NETMASK="Storage network subnet mask"
        STARTMODE="onboot"
      • Red Hat, EulerOS, and CentOS:
        TYPE=Ethernet
        BOOTPROTO=static
        NAME=eth1
        DEVICE=eth1
        IPADDR=Storage network IP address
        NETMASK=Storage network subnet mask
        ONBOOT=yes

      iSCSI network
      • SUSE:
        BOOTPROTO="static"
        DEVICE="eth2"
        IPADDR="iSCSI network IP address"
        NETMASK="iSCSI network subnet mask"
        STARTMODE="onboot"
      • Red Hat, EulerOS, and CentOS:
        TYPE=Ethernet
        BOOTPROTO=static
        NAME=eth2
        DEVICE=eth2
        IPADDR=iSCSI network IP address
        NETMASK=iSCSI network subnet mask
        ONBOOT=yes
    6. For the SUSE operating system, run the vi /etc/sysconfig/network/routes command, press i to enter the editing mode, and add a default route in the format default Management network gateway. Press Esc and enter :wq to save and close the file.
    7. Run the service network restart command to restart the network service so that the configuration takes effect.
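
    As a filled-in example with hypothetical addresses, on a Red Hat, EulerOS, or CentOS CVM whose management NIC is eth0 with IP address 192.168.100.11, subnet mask 255.255.255.0, and gateway 192.168.100.1, /etc/sysconfig/network-scripts/ifcfg-eth0 would contain:

      TYPE=Ethernet
      BOOTPROTO=static
      NAME=eth0
      DEVICE=eth0
      IPADDR=192.168.100.11
      NETMASK=255.255.255.0
      GATEWAY=192.168.100.1
      ONBOOT=yes

    The storage and iSCSI network files (ifcfg-eth1 and ifcfg-eth2) follow the same pattern with their own addresses and without a GATEWAY line.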

  4. Install the operating system dependency package for the CVM by referring to 3.
  5. Create CVMs for other servers.

    1. Clone the current CVM to other servers. For details about how to clone a CVM, see the official VMware documentation. A command-line sketch follows these sub-steps.
    2. Configure the network for each cloned CVM by referring to 3.
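
    As a minimal sketch of the clone step using govc, assuming the configured CVM is named cvm01, the target ESXi host is esxi02.example.com, and the clone is named cvm02 (all placeholders); cloning through the vSphere Client works equally well:

      # Clone the configured CVM to another host, leaving the clone powered off
      # so that its network can be reconfigured before first start.
      govc vm.clone -vm cvm01 -host esxi02.example.com -on=false cvm02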

  6. Pass through related hardware to the CVM if the CVM also needs to function as a storage node.

    NOTE:
    • The RAID controller card and SSD card are passed through to the CVM to provide main storage and cache for the storage node. However, the RAID controller card of the disk where the server operating system resides cannot be passed through to the CVM; otherwise, the operating system would need to be reinstalled.
    • If the RoCE or IB network is used, you need to pass through the RoCE NIC or IB NIC to the CVM.
    1. Power off the CVM.
    2. Select an ESXi host and choose Hardware > PCI Device.
    3. Select the devices to be passed through and click OK to add them to the list of passthrough devices. (A command for listing the host's PCI devices is shown after these sub-steps.)
    4. Select the CVM and choose Configuration > Edit. In the lower part of the page, select the devices in New Device and click Add.
      • Select PCI Device for the RAID controller card and SSD card.
      • Select Network for the RoCE NIC and IB NIC.
    5. Power on the CVM.
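
    To confirm which PCI devices are present on the ESXi host before selecting them in 3, you can list them from the ESXi Shell; the grep pattern below is only an illustration and should be adjusted to the actual RAID controller, SSD card, or RoCE/IB NIC models:

      # List the PCI devices on the host and filter for likely storage and RDMA devices.
      esxcli hardware pci list | grep -i -E 'raid|nvme|mellanox'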

  7. Set the startup sequence of the CVM node.

    1. Log in to vCenter.
    2. Select an ESXi host and choose Configure > Virtual Machines > VM Startup/Shutdown.
    3. Click Edit. In the dialog box that is displayed, select Automatically start and stop the virtual machines with the system.
    4. Select the VM and click the up or down arrow to move it to the Automatic Startup area. Set Startup Delay to 0.
    NOTE:

    The CVM must be placed first among all automatically started VMs to improve storage service reliability.

Updated: 2019-07-12

Document ID: EDOC1100081424
