
Basic Storage Service Guide for File 15

OceanStor 5300 V3, 5500 V3, 5600 V3, 5800 V3, and 6800 V3 Storage System V300R003

This document describes the basic storage services and explains how to configure and manage basic storage services.
Configuring an NFS Share

The OceanStor 5300 V3/5500 V3/5600 V3/5800 V3/6800 V3 supports the NFS share mode. After configuring an NFS share, you can set different access permissions for clients.

Planning an NFS Share

Planning an NFS share helps facilitate the follow-up service configuration. The following items need to be planned: networks, domains, permissions, and clients.

Table 3-13 lists the required preparation items.
Table 3-13  NFS share planning

Network
  • IP address of the storage system
    Requirement: The storage system uses a logical port (LIFa) to provide shared space for clients.
    Example: 172.16.128.10
  • IP address of the access client
    Requirement: The access client and the storage system must be reachable to each other (they can ping each other).
    Example: 192.168.0.10
  • IP address of the maintenance terminal
    Requirement: The maintenance terminal and the storage system must be reachable to each other (they can ping each other).
    Example: 192.168.128.10
  • NIS or LDAP domain server
    Requirement: In an NIS or LDAP domain, collect the domain server's IP address and domain information, and ensure that the domain server and the storage system reside on the same network and can ping each other.
    Example: LDAP server 172.16.128.15

Domain
  • Non-domain, NIS domain, or LDAP domain
    Requirement: Configure a non-domain environment, an NIS domain, or an LDAP domain based on site requirements. Generally, configure a domain environment for a large enterprise or an enterprise that requires high security.
    NOTE: When adding a storage system to a domain, you must connect both controllers of the storage system to the domain controller.
    Example: LDAP

Permission
  • Set users' permissions for accessing a file system.
    • When NFSv3 is used, the storage system supports UGO permissions but not ACL permissions. UGO permissions include Execute, Read, and Write.
      NOTE: You can run the admin:/>change service nfs support_v3_enabled=on v3_acl_enabled=on command and remount the file system of the NFS share to enable ACL permissions (not applicable to V300R003C00/V300R003C10).
    • When NFSv4 is used, the storage system supports both UGO permissions and ACL permissions. ACL permissions include List Directories, Read Data, and Write Data.
    Example: Read-only

a: A LIF is a logical port created on a physical port, bond port, or VLAN. Each LIF corresponds to an IP address.

NOTE:

In scenarios where a firewall is deployed, ensure that the RPCBIND service on the client is running properly to provide the RPC port mapping service. In addition, ensure that firewall rules allow the storage system to initiate connection requests and send messages to the client.

Configuration Process

This section describes the NFS share configuration process.

Figure 3-2 shows the NFS share configuration process.
Figure 3-2  NFS share configuration process

Preparing Data

Before configuring an NFS share in a storage system, plan and collect required data to assist in the follow-up service configuration.

Table 3-14 describes data to be planned and collected.

Table 3-14  Preparations required for configuring an NFS share

Logical IP address
  Description: Logical IP address used by the storage system to provide shared space for clients.
  Example: 172.16.128.10

File system
  Description: File system used to create the NFS share. The OceanStor 5300 V3/5500 V3/5600 V3/5800 V3/6800 V3 allows you to configure a file system or one of its subdirectories as an NFS share.
  Example: FileSystem001

LDAP domain or NIS domain information
  Description:
  LDAP domain information includes:
  • IP address of the primary LDAP server
  • Optional: IP address of the standby LDAP server
  • Port number used by the LDAP protocol
  • Type of the encryption protocol
  • Password hash algorithm
  • Base distinguished name (DN)
  • Bind DN
  NIS domain information includes:
  • Domain name
  • IP address of the active server in the NIS domain
  • Optional: IP address of the standby server in the NIS domain
  Example: LDAP

Permission
  Description: NFS share permissions of a client. The permissions include read-only and read-write.
  • Read-only: Clients have only read permission for the NFS share.
  • Read-write: Clients have read and write permission for the NFS share.
  Example: Read-only

NOTE:

You can contact your network administrator to obtain desired data.
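As a hedged illustration of how a client would consume the planned share, the commands below compose NFSv3 and NFSv4 mount invocations from the example values in Table 3-14. The share path and mount point are assumptions for illustration; substitute your site's values. The commands are only printed here, not executed:

```shell
# Sketch: client-side mount commands for the planned share.
# The logical IP (172.16.128.10) and file system name (FileSystem001)
# are the example values from Table 3-14; the mount point is assumed.
LOGICAL_IP=172.16.128.10
SHARE_PATH=/FileSystem001
MOUNT_POINT=/mnt/nfs_share

# NFSv3 mount (UGO permissions only):
V3_CMD="mount -t nfs -o vers=3 ${LOGICAL_IP}:${SHARE_PATH} ${MOUNT_POINT}"
# NFSv4 mount (UGO and ACL permissions):
V4_CMD="mount -t nfs -o vers=4 ${LOGICAL_IP}:${SHARE_PATH} ${MOUNT_POINT}"

echo "$V3_CMD"
echo "$V4_CMD"
```

Running either composed command as root on a Linux client (after creating the mount point) attaches the share with the corresponding protocol version.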

Checking the License File

Each value-added feature requires a license file for activation. Before configuring a value-added feature, ensure that its license file is valid for the feature.

Procedure

  1. Log in to DeviceManager.
  2. Choose Settings > License Management.
  3. Check the active license files.
    1. In the navigation tree on the left, choose Active License.
    2. In the middle information pane, verify the information about active license files.

Follow-up Procedure

If the storage system generates an alarm indicating that the license has expired, obtain and import a new license.

Configuring a Network

This section describes how to use DeviceManager to configure IP addresses for a storage system.

Procedure

  1. Log in to DeviceManager and choose Provisioning > Port.

    The Port page is displayed.

  2. Optional: Create a bond port.

    Port bonding provides higher bandwidth and link redundancy. After Ethernet ports are bonded, the MTU changes to the default value and you must set the link aggregation mode for the ports. For example, on Huawei switches, you must set the bonded ports to static LACP mode.

    NOTE:
    • The port bond mode of a storage system has the following restrictions:
      • On the same controller, a bond port consists of a maximum of eight Ethernet ports.
      • Only interface modules with the same port rate (GE or 10GE) can be bonded.
      • Ports cannot be bonded across controllers, and non-Ethernet ports cannot be bonded.
      • SmartIO ports cannot be bonded if they work in cluster or FC mode, or run the FCoE service in FCoE/iSCSI mode.
      • The MTU of a SmartIO port must be the same as that of the host.
      • Read-only users cannot bond Ethernet ports.
      • Each port can be added to only one bond port.
      • A bond port created from physical ports cannot be added to a port group.
    • Even after ports are bonded, each host still transmits data through a single port; the total bandwidth increases only when there are multiple hosts. Determine whether to bond ports based on site requirements.
    • The link aggregation mode varies with the switch manufacturer.

    1. In Ethernet Ports, select an Ethernet port and click More > Bond Port.

      The Bond Port dialog box is displayed.

    2. Enter bond port information. Table 3-15 describes related parameters.

      Table 3-15  Bond port parameters

      Bond Name: Name of the bond port.
        Example: bond01

      Available Ports: Ports available for bonding.
        Example: CTE0.A.IOM1.P0

    3. Click OK.

      The Danger dialog box is displayed.

    4. Select I have read and understood the consequences associated with performing this operation, and click OK.
  3. Create a logical port.

    NOTE:
    A maximum of 64 logical ports can be created on each controller. If more are created, a large number of port failures can cause logical ports to drift to a few available ports, which deteriorates service performance.

    1. Select Logical Ports and click Create.

      The Create Logical Port dialog box is displayed.

    2. Enter logical port information. Table 3-16 describes related parameters.

      Table 3-16  Create Logical Port parameters

      Name: Name of the logical port.
        Example: logip

      IP Address Type: Type of the IP address, IPv4 Address or IPv6 Address.
        Example: IPv4 Address

      IPv4 Address (IPv6 Address): IP address of the logical port.
        Example: 172.16.128.10

      Subnet Mask (Prefix): Subnet mask (prefix) of the logical port.
        Example: 255.255.255.0

      IPv4 Gateway (IPv6 Gateway): Gateway address.
        Example: 172.16.128.1

      Primary Port: Physical port preferred by the logical port.
        Example: CTE0.A.IOM0.P0

      IP Address Floating: Whether IP address floating is enabled. The OceanStor 5300 V3/5500 V3/5600 V3/5800 V3/6800 V3 supports IP address floating: when the primary port fails, the IP address floats to another available port. For details, see the OceanStor 5300 V3&5500 V3&5600 V3&5800 V3&6800 V3 Storage System V300R003 IP Address Floating Deployment Guide.
        NOTE: File system shares do not support multipathing; IP address floating is used instead to improve link reliability.
        Example: Enable

      Failback Mode: Failback mode of the IP address, Automatic or Manual.
        NOTE:
        • If Failback Mode is Manual, ensure that the link to the primary port is normal before triggering the failback. Services can be manually failed back to the primary port only after the link to the primary port has remained normal for over five minutes.
        • If Failback Mode is Automatic, services automatically fail back to the primary port only after the link to the primary port has remained normal for over five minutes.
        Example: Automatic

      Activate Now: Whether the logical port is activated immediately. After activation, the logical IP address can be used to access the shared space.
        Example: Enable

    3. Click OK.

      The Success dialog box is displayed.

    4. Click OK.
  4. Optional: Manage routes.

    You need to configure a route when the NFS server and the storage system are not on the same network.

    • If a domain controller server exists, ensure that the logical IP addresses and the domain controller server can ping each other. If they cannot, add a route from the logical IP addresses to the network segment of the domain controller server.
    • When configuring NFS share access, if the NFS server and the logical IP addresses cannot ping each other, add a route from the logical IP addresses to the network segment of the NFS server.

    1. Select the logical port for which you want to add a route and click Route Management.

      The Route Management dialog box is displayed.

    2. Configure the route information for the logical port.

      1. In IP Address, select the IP address of the logical port.
      2. Click Add.
        The Add Route dialog box is displayed.

        The default internal heartbeat IP addresses on a dual-controller storage system are 127.127.127.10 and 127.127.127.11; on a four-controller storage system they are 127.127.127.10, 127.127.127.11, 127.127.127.12, and 127.127.127.13. Therefore, the route's IP address cannot fall within the 127.127.127.XXX segment, and the gateway IP address cannot be any of these heartbeat addresses; otherwise, routing will fail. (Internal heartbeat links are established between controllers so that they can detect each other's working status; you do not need to connect cables separately. Internal heartbeat IP addresses are assigned before delivery and cannot be changed.)

      3. In Type, select the type of the route to be added.
        There are three route options:
        • Default route

          Data is forwarded through this route by default if no preferred route is available. The target address field and the target mask field (IPv4) or prefix (IPv6) of the default route are automatically set to 0. To use this option, you only need to add a gateway.

        • Host route

          A host route is the route to an individual host. The destination mask (IPv4) or prefix (IPv6) of a host route is automatically set to 255.255.255.255 or 128, respectively. To use this option, you only need to add the target address and a gateway.

        • Network segment route

          A network segment route is the route to a network segment. You need to add the target address, target mask (IPv4) or prefix (IPv6), and gateway. For example: target address 172.17.0.0, target mask 255.255.0.0, and gateway 172.16.0.1.

      4. Set Destination Address.
        • If IP Address is an IPv4 address, set Destination Address to the IPv4 address or network segment of the application server's service network port or that of the other storage system's logical port.
        • If IP Address is an IPv6 address, set Destination Address to the IPv6 address or network segment of the application server's service network port or that of the other storage system's logical port.
      5. Set Destination Mask (IPv4) or Prefix (IPv6).
        • If a Destination Mask is set for an IPv4 address, this parameter specifies the subnet mask of the IP address for the service network port on the application server or storage device.
        • If a Prefix is set for an IPv6 address, this parameter specifies the prefix of the IPv6 address for application server's service network port or that of the other storage system's logical port.
      6. In Gateway, enter the gateway of the local storage system's logical port IP address.

    3. Click OK. The route information is added to the route list.

      The Danger dialog box is displayed.

    4. Confirm the information in the dialog box and select I have read and understood the consequences associated with performing this operation.
    5. Click OK.

      The Success dialog box is displayed indicating that the operation succeeded.

      NOTE:

      To remove a route, select it and click Remove.

    6. Click Close.
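The heartbeat-address restriction described in the route-configuration step can be pre-checked with a small script before adding a route. This is an illustrative sketch; is_valid_gateway is a hypothetical helper, not a storage system command:

```shell
# Sketch: reject gateway/route addresses that collide with the internal
# heartbeat segment 127.127.127.XXX described in the route step.
# is_valid_gateway is a hypothetical pre-check helper, not a vendor tool.
is_valid_gateway() {
    case "$1" in
        127.127.127.*) return 1 ;;  # reserved for internal heartbeat
        *) return 0 ;;
    esac
}

if is_valid_gateway "172.16.128.1"; then echo "172.16.128.1: ok"; fi
if ! is_valid_gateway "127.127.127.10"; then echo "127.127.127.10: rejected"; fi
```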

Enabling the NFS Service

Before configuring an NFS share, enable the NFS service for clients to access the NFS share. The storage system supports NFSv3 and NFSv4.

Prerequisites

The license for NFS protocol has been imported and activated.

Context

The system supports NFSv3 and NFSv4.

Procedure

  1. Log in to DeviceManager.
  2. Choose Settings > Storage Settings > File Storage Service > NFS Service.
  3. Enable the NFS service according to the protocol version that the host uses to mount the NFS share.

    • If the host uses NFSv3 to mount the share, select Enable NFSv3.
    • If the host uses NFSv4 to mount the share, perform the following steps:
      1. Click Advanced and select Enable NFSv4.
      2. After NFSv4 is enabled, enter the storage domain name in Domain Name.
      NOTE:
      • NFSv4 adopts a user+domain name mapping mechanism, enhancing the security of clients' access to shared resources. It is recommended that hosts use this version to mount shares.
      • In a non-domain or LDAP environment, enter the default domain name localdomain.
      • In an NIS environment, the entered name must be consistent with the Domain value in the /etc/idmapd.conf file on the Linux client that accesses the shares. It is recommended that both be the NIS domain name.
      • The domain name must be no longer than 64 characters.
      • To disable the NFS service, deselect Enable NFSv3/NFSv4.

  4. Click Save.

    The Success dialog box is displayed indicating that the operation succeeded.

  5. Click OK.
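For NFSv4 mounts, the client-side counterpart of the Domain Name setting above lives in /etc/idmapd.conf. The sketch below only writes and reads a temporary file for illustration (localdomain is the default name from the note above; the sed one-liner is an assumed audit trick, not a vendor tool). On a real client, edit /etc/idmapd.conf directly and restart the idmap service:

```shell
# Sketch: the client-side /etc/idmapd.conf Domain entry that must match
# the Domain Name configured on the storage system for NFSv4.
# A temporary file is used here so the example is self-contained.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[General]
Domain = localdomain
EOF

# Extract the configured domain, as a quick audit might:
DOMAIN=$(sed -n 's/^Domain *= *//p' "$CONF")
echo "client idmap domain: $DOMAIN"
rm -f "$CONF"
```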

(Optional) Configuring a Storage System to Add It to an LDAP Domain

This section describes how to add the storage system to an LDAP domain.

Configuration Process

This section introduces the process of configuring LDAP domain authentication.

Figure 3-3 shows the process of configuring the LDAP domain authentication.
Figure 3-3  Process of configuring a storage system to add it to an LDAP domain

Preparing Configuration Data of the LDAP Domain

Collect the configuration data of an LDAP domain server in advance to add storage systems to the LDAP domain.

LDAP Domain Parameters

LDAP data is organized in a tree structure that clearly lays out organizational information. A node on this tree is called an entry. Each entry has a distinguished name (DN). The DN of an entry consists of the Base DN and an RDN. The Base DN indicates the position of the parent node where the entry resides on the tree, and the RDN is an attribute that distinguishes the entry from others, such as UID or CN.

LDAP directories function like file system directories. For example, the directory dc=redmond,dc=wa,dc=microsoft,dc=com can be regarded as the following file system path: com\microsoft\wa\redmond. In another example, in the directory cn=user1,ou=user,dc=example,dc=com, cn=user1 indicates a username and ou=user indicates an organizational unit of an Active Directory (AD); that is, user1 is in the user organizational unit of the example.com domain.
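The DN-to-path analogy above can be made concrete with a few lines of shell. This sketch (the dn_to_path helper is illustrative, not part of any LDAP tool) reverses the dc components of a DN into the file-system-style path described above:

```shell
# Sketch: map the dc components of a DN onto a file-system-like path,
# mirroring the dc=redmond,dc=wa,dc=microsoft,dc=com example.
# dn_to_path is an illustrative helper, not an LDAP utility.
dn_to_path() {
    # keep only dc= components, reverse their order, join with '\'
    echo "$1" | tr ',' '\n' | sed -n 's/^dc=//p' | tac | paste -sd '\\' -
}

dn_to_path "dc=redmond,dc=wa,dc=microsoft,dc=com"
```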

The following figure shows the data structure of an LDAP server:

Table 3-17 describes the meanings of LDAP entry acronyms.
Table 3-17  Meanings of LDAP entry acronyms

o: Organization
ou: Organizational Unit
c: Country Name
dc: Domain Component
sn: Surname
cn: Common Name

What Is OpenLDAP?

OpenLDAP is a free, open-source implementation of LDAP that is widely used in popular Linux distributions. It is distributed under the OpenLDAP Public License.

OpenLDAP mainly consists of the following four components:
  • slapd: a stand-alone LDAP daemon
  • slurpd: a stand-alone LDAP update and replication daemon
  • libraries implementing the LDAP protocol
  • tools and sample clients
The OpenLDAP installation package can be found on the Userbooster website: http://www.userbooster.de/en/download/openldap-for-windows.aspx.
NOTE:
The OpenLDAP installation package is not provided on the OpenLDAP website. The installation package supports the following Windows operating systems: Windows XP, Windows Server 2003, Windows Server 2008, Windows Vista, Windows 7, Windows 8, and Windows Server 2012.
Obtaining LDAP Configuration Data in Windows
Using OpenLDAP as an example, the following steps describe how to obtain LDAP configuration data.
NOTE:

For V300R003, in Windows operating systems, you can obtain the LDAP configuration data only by installing OpenLDAP; the LDAP service provided by an AD domain is not supported.

  1. Open the OpenLDAP installation directory.
  2. Find the slapd.conf system configuration file.
  3. Use the text editing software to open the configuration file and search for the following fields:
    suffix   "dc=example,dc=com"
    rootdn  "cn=Manager,dc=example,dc=com"
    
    rootpw    XXXXXXXXXXXX
    
    • dc=example,dc=com corresponds to Base DN on the storage system configuration page.
    • cn=Manager,dc=example,dc=com corresponds to Bind DN on the storage system configuration page.
    • XXXXXXXXXXXX corresponds to Bind Password on the storage system configuration page. If the password is in ciphertext, contact the LDAP server administrator to obtain it.
  4. Find configuration files (with .ldif as the file name extension) of users and user groups that need to access storage systems.
    NOTE:

    LDAP Interchange Format (LDIF) is one of the most common file formats for LDAP applications. It is a standard mechanism that represents directories in the text format, and it allows users to import data to and export data from the directory server. LDIF files store LDAP configurations and directory contents, and you can obtain parameter information from LDIF files.

  5. Use text editing software to open the configuration file and find the DNs of a user and a user group that correspond to User Directory and Group Directory respectively on the storage system configuration page.
    #root on the top
    dn: dc=example,dc=com
    dc: example
    objectClass: domain
    objectClass: top
    #First organization unit name: user
    dn: ou=user,dc=example,dc=com
    ou: user
    objectClass: organizationalUnit
    objectClass: top
    #Second organization unit name: groups
    dn: ou=group,dc=example,dc=com
    ou: groups
    objectClass: organizationalUnit
    objectClass: top
    #The first user represents user1 that belongs to organization unit user in the organizational structure topology.
    dn: cn=user1,ou=user,dc=example,dc=com
    cn: user1
    objectClass: posixAccount
    objectClass: shadowAccount
    objectClass: inetOrgPerson
    sn: user1
    uid: user1
    uidNumber: 2882
    gidNumber: 888
    homeDirectory: /export/home/ldapuser
    loginShell: /bin/bash
    userPassword: {ssha}eoWxtWNl8YbqsulnwFwKMw90Cx5BSU9DRA==xxxxxx
    #The second user represents user2 that belongs to organization unit user in the organizational structure topology.
    dn: cn=user2,ou=user,dc=example,dc=com
    cn: user2
    objectClass: posixAccount
    objectClass: shadowAccount
    objectClass: inetOrgPerson
    sn: client
    uid: client
    uidNumber: 2883
    gidNumber: 888
    homeDirectory: /export/home/client
    loginShell: /bin/bash
    userPassword: {ssha}eoWxtWNl8YbqsulnwFwKMw90Cx5BSU9DRA==xxxxxx
    #The first user group represents group1 that belongs to organization unit group in the organizational structure topology. The group contains user1 and user2.
    dn: cn=group1,ou=group,dc=example,dc=com
    cn: group1
    gidNumber: 888
    memberUid: user1    #Belongs to the group.
    memberUid: user2    #Belongs to the group.
    objectClass: posixGroup
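An LDIF export like the one above can be audited mechanically. The following sketch extracts the members of group1 from a minimal copy of the listing, written to a temporary file so the example is self-contained:

```shell
# Sketch: list which users a posixGroup entry grants access to, by
# pulling the memberUid attributes out of an LDIF fragment.
LDIF=$(mktemp)
cat > "$LDIF" <<'EOF'
dn: cn=group1,ou=group,dc=example,dc=com
cn: group1
gidNumber: 888
memberUid: user1
memberUid: user2
objectClass: posixGroup
EOF

# Collect the memberUid values into a comma-separated list:
MEMBERS=$(awk -F': ' '/^memberUid:/ {print $2}' "$LDIF" | paste -sd ',' -)
echo "group1 members: $MEMBERS"
rm -f "$LDIF"
```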
    
Obtaining LDAP Configuration Data in Linux

Using OpenLDAP as an example, the following steps describe how to obtain LDAP configuration data.

  1. Log in to an LDAP server as user root.
  2. Run the cd /etc/openldap command to go to the /etc/openldap directory.
    linux-ldap:~ # cd /etc/openldap
    linux-ldap:/etc/openldap #
  3. Run the ls command to view the system configuration file slapd.conf and the configuration files (with .ldif as the file name extension) of users and user groups that need to access storage systems.
    linux-ldap:/etc/openldap #ls
    example.ldif ldap.conf schema slap.conf slap.con slapd.conf
  4. Run the cat command to open system configuration file slapd.conf where you can view related parameters.
    linux-ldap:/etc/openldap #cat slapd.conf
    
    suffix   "dc=example,dc=com"
    rootdn  "cn=Manager,dc=example,dc=com"
    
    rootpw    XXXXXXXXXXXX
    
    • dc=example,dc=com corresponds to Base DN on the storage system configuration page.
    • cn=Manager,dc=example,dc=com corresponds to Bind DN on the storage system configuration page.
    • XXXXXXXXXXXX corresponds to Bind Password on the storage system configuration page. If the password is in ciphertext, contact the LDAP server administrator to obtain it.
  5. Run the cat command to open the example.ldif file. Find the DNs of a user and a user group; these correspond to User Directory and Group Directory respectively on the storage system configuration page. For details about the parameters, see Example of LDIF Files in Windows.
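Steps 4 and 5 locate the suffix and rootdn fields by eye; the same fields can be pulled out with sed. A minimal sketch, using a temporary copy of the slapd.conf fragment shown above (on a real server, read /etc/openldap/slapd.conf directly):

```shell
# Sketch: extract Base DN and Bind DN from a slapd.conf fragment.
# A temporary copy keeps the example self-contained.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
suffix   "dc=example,dc=com"
rootdn  "cn=Manager,dc=example,dc=com"
rootpw    XXXXXXXXXXXX
EOF

# suffix -> Base DN, rootdn -> Bind DN on the storage configuration page:
BASE_DN=$(sed -n 's/^suffix[[:space:]]*"\(.*\)"/\1/p' "$CONF")
BIND_DN=$(sed -n 's/^rootdn[[:space:]]*"\(.*\)"/\1/p' "$CONF")
echo "Base DN: $BASE_DN"
echo "Bind DN: $BIND_DN"
rm -f "$CONF"
```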
Configuring LDAP Domain Authentication Parameters

If an LDAP domain server is deployed on the customer's network, add the storage system to the LDAP domain. After the system is added, the LDAP domain server can authenticate NFS clients when they attempt to access the system's shared resources.

Prerequisites

  • An LDAP domain has been set up.
  • Associated configurations have been completed, and required data is ready.
NOTE:
  • The storage system can connect to the LDAP server through the management port or a service port (logical port). In either case, all controllers must be able to communicate with the LDAP server. You are advised to use a service port to connect to the LDAP server.
  • The storage system can connect to only one LDAP server.
Precautions

You are advised to use physical isolation and end-to-end encryption to ensure security of data transfer between clients and LDAP domain servers.

You are advised to configure a static IP address for the Lightweight Directory Access Protocol (LDAP) server. A dynamic IP address may pose security risks.

Procedure

  1. Log in to DeviceManager.
  2. Choose Settings > Storage Settings > File Storage Service > Domain Authentication.
  3. In the LDAP Domain Settings area, configure the LDAP domain authentication parameters. Table 3-18 describes the related parameters.

    Table 3-18  Parameters of the LDAP domain

    Primary IP Address: IP address of the LDAP domain server.
      NOTE: Ensure that the IP address is reachable. Otherwise, user authentication commands and network commands will time out.
      Example: 192.168.0.100

    Standby IP Address 1: IP address of standby LDAP server 1.
      NOTE: Ensure that the IP address is reachable. Otherwise, user authentication commands and network commands will time out.
      Example: 192.168.0.101

    Standby IP Address 2: IP address of standby LDAP server 2.
      NOTE: Ensure that the IP address is reachable. Otherwise, user authentication commands and network commands will time out.
      Example: 192.168.0.102

    Port: Port used by the system to communicate with the LDAP domain server. The default port number is 389 for LDAP and 636 for LDAPS.
      Value range: 1 to 65535
      Example: 636

    Protocol: Protocol used by the system to communicate with the LDAP domain server.
      • LDAP: the system uses the standard LDAP protocol to communicate with the LDAP domain server.
      • LDAPS: the system uses LDAP over SSL to communicate with the LDAP domain server, if the server supports SSL.
      NOTE: Before selecting LDAPS, import the CA certificate file of the LDAP domain server. If the LDAP server is required to authenticate the storage system, also import the certificate file and private key file.
      Example: LDAP

    Base DN: DN from which LDAP searches start.
      Rule: A DN consists of RDNs separated by commas (,). An RDN is in the key=value format. The value cannot start with a pound sign (#) or a space and cannot end with a space. For example, testDn=testDn,xxxDn=xxx.
      Format: xxx=yyy entries separated by commas (,)
      Example: dc=admin,dc=com

    Bind DN: DN used to bind to the directory.
      NOTE: The directory must be bound before its content can be searched.
      Rule: A DN consists of RDNs separated by commas (,). An RDN is in the key=value format. The value cannot start with a pound sign (#) or a space and cannot end with a space. For example, testDn=testDn,xxxDn=xxx.
      Format: xxx=yyy entries separated by commas (,)
      Example: cn=ldapuser01,ou=user,dc=admin,dc=com

    Bind Password: Password for accessing the bind directory.
      NOTE: A simple password may cause security risks. A complicated password is recommended, for example, one that contains uppercase letters, lowercase letters, digits, and special characters.
      Example: !QAZ2wsx

    Confirm Bind Password: Confirmation of the password used by the system to log in to the LDAP domain server.
      Example: !QAZ2wsx

    User Directory: User DN configured on the LDAP domain server.
      Example: ou=user,dc=admin,dc=com

    Group Directory: User group DN configured on the LDAP domain server.
      Example: ou=Groups,dc=admin,dc=com

    Search Timeout Duration (seconds): How long the client waits for a search result from the server. The default value is 3 seconds.
      Example: 3

    Connection Timeout Duration (seconds): How long the client waits when connecting to the server. The default value is 3 seconds.
      Example: 3

    Idle Timeout Duration (seconds): How long the connection may be idle (no communication between the LDAP server and the client) before it is closed. The default value is 30 seconds.
      Example: 30

  4. Click Save. The LDAP domain authentication configuration is complete.

    NOTE:

    Click Restore to Initial to restore the LDAP domain authentication settings to their initial values.
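Before saving, the example values in Table 3-18 can be sanity-checked from any Linux host that has the OpenLDAP client tools installed, using ldapsearch. The sketch below only assembles and prints the command (no server is contacted); in practice, quote the DNs and replace the example addresses with your site's values:

```shell
# Sketch: compose an ldapsearch invocation from the Table 3-18 example
# values to verify server reachability and the bind credentials.
# The command is only assembled and printed here, not executed.
LDAP_SERVER=192.168.0.100
PORT=389
BASE_DN="dc=admin,dc=com"
BIND_DN="cn=ldapuser01,ou=user,dc=admin,dc=com"

# -x simple auth, -H server URI, -D bind DN, -W prompt for password,
# -b search base, -s base limits the search to the base entry itself.
CMD="ldapsearch -x -H ldap://${LDAP_SERVER}:${PORT} -D ${BIND_DN} -W -b ${BASE_DN} -s base"
echo "$CMD"
```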

(Optional) Generating and Exporting a Certificate on the Storage System

This section describes how to generate and export a certificate required for configuring domain authentication on the storage system.

Context

  • The certificate generated on the storage system is unsigned and must be signed on a signature server.
  • If you use a third-party tool to export certificate request files, also save the exported private key file. These files, together with the signed certificate and the CA certificate, are imported to the storage system when the certificates are verified on the storage system.

Procedure

  1. Log in to DeviceManager.
  2. Choose Settings > Storage Settings > Value-added Service Settings > Credential Management.
  3. Set Certificate Type to Domain authentication certificate and click Generate and Export.

    The Save As dialog box is displayed. Select a path to save the certificate and click Save.

Follow-up Procedure

After the domain authentication certificate is exported, have it signed.
(Optional) Signing the Authentication Certificate and Exporting the CA Certificate

An exported domain authentication certificate takes effect only after it is signed by a third-party signing server. The CA certificate should be exported at the same time.

After the domain authentication certificate is exported, have it signed based on actual conditions and export the CA certificate for the follow-up procedures.

(Optional) Importing the Certificate and CA Certificate to the Storage System

This section describes how to import the authentication certificate and CA certificate to the storage system to activate the authentication certificate.

Prerequisites

  • The signed certificate and CA certificate already exist.
  • If the certificate file is exported and signed by a third-party tool, ensure that the private key file exists.

Context

If the certificate file is exported and signed by a third-party tool, import the private key file when you import and activate the certificate and CA certificate.

Procedure

  1. Log in to DeviceManager.
  2. Choose Settings > Storage Settings > Value-added Service Settings > Credential Management.
  3. Import and activate the certificate.
    1. After the certificate has been signed by the server, click Import and Activate.

      The Import Certificate dialog box is displayed.

    2. Set Certificate Type to Domain authentication certificate and import the signed certificate and CA certificate. Table 3-19 describes the parameters.

      Table 3-19  Certificate parameters

      Certificate Type: Type of the certificate.
        Example: Domain authentication certificate

      Certificate File: Certificate file that has been exported and signed.
        Example: None

      CA Certificate File: Certificate file of the server.
        Example: None

      Private Key File: Private key file of the device.
        Example: None

    3. Click OK.
      The Warning dialog box is displayed.
    4. Carefully read the content of the dialog box, select I have read and understood the consequences associated with this operation, and click OK.
      The Success dialog box is displayed.
    5. Click OK.
      The certificate has been successfully imported and activated.

(Optional) Configuring a Storage System to Add It to an NIS Domain

This section describes how to add the storage system to an NIS domain.

Preparing Data of the NIS Domain Environment

Collect the NIS server configuration data in advance so that the storage system can be added to the NIS domain.

Why NIS Domains?

In the UNIX shared mode, every node that provides the sharing service must maintain configuration files such as /etc/hosts and /etc/passwd, which requires considerable maintenance effort. For example, if you add a new node to the shared network, all UNIX-based systems need to update their /etc/hosts files to include the name of the new node. Because the new node may need to access all other nodes, all the systems also need to modify their /etc/passwd files. These operations become time-consuming and tedious once the network has more than 10 nodes.

The Network Information Service (NIS), developed by Sun Microsystems, uses a single system (the NIS server) to manage and maintain the files containing host name and user account information, providing references for all the systems configured as NIS clients. With NIS, adding a host to the shared network only requires modifying the related file on the NIS server and propagating the modification to the other nodes on the network.



A figure in the original document shows the relationship between the NIS server and the other hosts.

Working Principles

When NIS is configured, the ASCII files in the NIS domain are converted to NIS database files (also called map files). Hosts in the NIS domain query and parse the NIS database files to perform operations such as authorized access and updates. For example, the common password file /etc/passwd of a UNIX host is converted to the NIS database files passwd.byname and passwd.byuid.
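On the NIS server, /etc/passwd is typically converted into maps keyed in more than one way: by user name and by UID (commonly named passwd.byname and passwd.byuid). A minimal sketch of how each key is derived from a passwd-format record; the record itself is illustrative:

```shell
# Sample /etc/passwd record (illustrative):
line='alice:x:1001:1001:Alice:/home/alice:/bin/sh'

# passwd.byname keys each record by field 1 (user name);
# passwd.byuid keys it by field 3 (UID).
name=$(printf '%s' "$line" | cut -d: -f1)
uid=$(printf '%s' "$line" | cut -d: -f3)
echo "passwd.byname key: $name"
echo "passwd.byuid key: $uid"
```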

Parameters

An NIS domain is a logical group of nodes that use the same NIS maps. A physical network can include multiple NIS domains, and nodes with the same domain name belong to the same NIS domain.

NIS domain–related files are saved in a subdirectory of /var/yp on the NIS server. The subdirectory name corresponds to the NIS domain name, for example, the files mapped to the research domain are saved in /var/yp/research.

The system super administrator can run the /usr/bin/domainname command to rename a domain in interactive mode. Common users can run the domainname command without parameters to obtain the default domain name of the local system.

Data Preparation Checklist
To smoothly add the storage system to an NIS domain, prepare or plan the required configuration data in advance based on the actual situation. Table 3-20 describes the data to be obtained before configuration.
Table 3-20  Data to be obtained

Item

How to Obtain/Example

Domain Name

Domain name of the server. Each level of the domain name contains 1 to 63 letters, digits, and hyphens (-), and cannot start or end with a hyphen (-). Levels must be separated by periods (.).

Please contact the administrator of the domain server.

[Example]

test.com

Primary Server Address

IP address or domain name of primary NIS domain server.

Please contact the administrator of the domain server.

[Example]

192.168.0.100

www.test.com

Standby Server Address 1 (Optional)

IP address or domain name of standby NIS server 1.

Please contact the administrator of the domain server.

[Example]

192.168.0.101

www.test.com

Standby Server Address 2 (Optional)

IP address or domain name of standby NIS server 2.

Please contact the administrator of the domain server.

[Example]

192.168.0.102

www.test.com
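The domain-name rule above can be checked mechanically before configuration. A minimal sketch assuming a POSIX shell with grep -E; valid_domain is a hypothetical helper for illustration, not a storage-system command:

```shell
# Check a domain name against the rule: each label has 1-63 letters,
# digits, or hyphens, cannot start or end with a hyphen, and labels
# are separated by periods.
valid_domain() {
  printf '%s\n' "$1" | grep -Eq \
    '^([A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?\.)*[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?$'
}

valid_domain "test.com" && echo "test.com: valid"
valid_domain "-bad.com" || echo "-bad.com: invalid"
```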

Configuring NIS Domain Authentication Parameters

If an NIS domain server is deployed on the customer's network, add the storage system to the NIS domain. After the system is added to the NIS domain, the NIS domain server can authenticate NFS clients when they attempt to access the system's shared resources.

Prerequisites

  • An NIS domain has been set up.
  • Associated configurations have been completed, and required data is ready.
NOTE:
• The storage system can be connected to the NIS server through the management port or a service port (Ethernet port or logical port). In either case, all controllers must be able to communicate with the NIS server. You are advised to use a service port to connect to the NIS server.
  • The storage system can be connected to only one NIS server.
Precautions

To avoid security risks generated during data transmission between the client and NIS domain server, you are advised to use a highly secure authentication mode, such as LDAP over SSL (LDAPS) or AD domain+Kerberos authentication, or adopt physical isolation or end-to-end encryption.

Procedure

  1. Log in to DeviceManager.
  2. Choose Settings > Storage Settings > File Storage Service > Domain Authentication.
  3. Select Enable to enable the NIS domain authentication.

    NOTE:

    NIS domain authentication does not support the transfer of encrypted data. Therefore, NIS domain authentication may cause security risks.

  4. In the NIS Domain Settings area, configure the NIS domain authentication parameters. The related parameters are shown in Table 3-21 below.



    Table 3-21  Parameters of the NIS domain

    Parameter

    Description

    Value

    Domain Name

    Domain name of a server.

    [Rule]

    Contains 1 to 63 letters, digits, and hyphens (-), and cannot start or end with a hyphen (-). The domain names of different levels contain a maximum of 63 characters and must be separated by periods (.).

    [Example]

    site

    Primary Server Address

    NIS domain server IP address.

    NOTE:
    Ensure that the IP address is reachable. Otherwise, user authentication commands and network commands will time out.

    [Example]

    192.168.0.100

    Standby Server Address 1

    IP address of standby NIS server 1.

    NOTE:
    Ensure that the IP address is reachable. Otherwise, user authentication commands and network commands will time out.

    [Example]

    192.168.0.101

    Standby Server Address 2

    IP address of standby NIS server 2.

    NOTE:
    Ensure that the IP address is reachable. Otherwise, user authentication commands and network commands will time out.

    [Example]

    192.168.0.102

  5. Click Save. The NIS domain authentication configuration is completed.

    NOTE:

Click Restore to Initial to restore the NIS domain authentication settings to their initial values.

(Optional) Configuring the NFSv4 Service to Enable It to Be Used in a Non-Domain Environment

This section describes how to configure the NFSv4 service to enable it to be used in a non-domain environment.

Background

According to the NFSv4 standard protocol, the NFSv4 service must run in a domain environment to function properly. To use the NFSv4 service in a non-domain environment, however, configure the user name@domain name mapping mechanism used by the NFSv4 service on your client. After the configuration is complete, the NFSv4 service uses UIDs and GIDs to transfer file information during service transactions between the storage system and the client.

Risks
  • In scenarios where the NFSv4 service is used in a non-domain environment, the user authentication method of the NFSv4 service is the same as that of the NFSv3 service. The method cannot meet the theoretical security requirements of the NFSv4 standard protocol.
• Users mapped by each client depend on the client's user and user group configuration files. The user and user group configuration files of each client must be maintained independently for proper mapping.
  • UIDs and GIDs must be used when ACLs of non-root users and non-root user groups are configured. Otherwise, the configuration will fail.

You are advised not to use the NFSv4 service in a non-domain environment.

Configuration on the Client
  1. Run the echo 1 > /sys/module/nfs/parameters/nfs4_disable_idmapping command.
  2. Run the cat /sys/module/nfs/parameters/nfs4_disable_idmapping command. If Y is displayed in the command output, the configuration is successful.

    If you have used the NFSv4 service to mount NFS shares before configuring the NFSv4 service to enable compatibility between the service and a non-domain environment, mount the NFS shares again after configuring the NFSv4 service.
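The sysfs setting above does not survive a reboot. On many Linux distributions it can be made persistent with a modprobe options file; the file name below is an assumption, and the staged file must be installed under /etc/modprobe.d/ as root (verify the mechanism against your distribution's documentation):

```shell
# Stage a modprobe options file that disables NFSv4 idmapping at module
# load time (file name is illustrative; copy it to /etc/modprobe.d/ as root):
printf 'options nfs nfs4_disable_idmapping=1\n' > nfs-idmapping.conf
cat nfs-idmapping.conf
```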

Creating an NFS Share

This section describes how to create an NFS share. After an NFS share is created, the shared file system is accessible to clients running operating systems such as SUSE Linux, Red Hat Linux, HP-UX, Sun Solaris, IBM AIX, and Mac OS.

Prerequisites

  • Associated configurations have been completed, and required data is ready.
• A logical port has been created.
  • The NFS service has been enabled.

Procedure

  1. Log in to DeviceManager.
  2. Choose Provisioning > Share > NFS (Linux/UNIX/MAC).
  3. Click Create.

    The Create NFS Share dialog box is displayed.

  4. Set NFS share path.

    Table 3-22 describes the related parameters.

    Table 3-22  Parameters for creating an NFS share

    Parameter

    Description

    Value

    File System

    File system for which you want to create an NFS share.

    [Example]

    FileSystem001

    Quota Tree

    Level-1 directory under the root directory of the file system.

    To share a quota tree, click and select a quota tree you want to share.

    [Example]

    share

    NOTE:

The share path is /FileSystem001/share.

    Share Path

    Name used by a user for accessing the shared resources.

    -

    Description

    Description of the created NFS share.

    [Value range]

    Contains 0 to 255 characters.

    [Example]

    Share for user 1.

    Character Encoding

Character encoding used when clients communicate with the storage system. The encoding applies to the names and metadata of shared files but does not change the encoding of file data. Options include:
    • UTF-8

      International code set

    • EUC-JP

      euc-j*[ ja ] code set

    • JIS

      JIS code set

    • S-JIS

      cp932*[ ja_jp.932 ] code set

    [Default value]

    UTF-8

  5. Click Next.

    The Set Permissions page is displayed.

  6. Optional: Assign the client the permission to the NFS share.
1. In Client List, select the client for which you want to configure the NFS share.

If the client list is empty, click Add to create a client. For details, see Adding an NFS Share Client.

    2. Click Next.
  7. Confirm that you want to create the NFS share.
    1. Confirm your settings of the NFS share to be created, and click Finish.

      The Execution Result dialog box is displayed indicating that the operation succeeded.

    2. Click Close.

Adding an NFS Share Client

An NFS share client enables client users to access shared file systems over the network.

Prerequisites

  • Associated configurations have been completed, and required data is ready.
  • Create an available host name on the DNS server in advance if you need to add a client of the Host type.
  • Create an available network group name on the LDAP or NIS server in advance if you need to add a client of the Network Group type.

Procedure

  1. Log in to DeviceManager.
  2. Choose Provisioning > Share > NFS (Linux/UNIX/MAC).
  3. Select the NFS share for which you want to add a client.
  4. In the Client List area, click Add.

    The Add Client dialog box is displayed.

  5. Configure the client properties. Table 3-23 describes related parameters.



    Table 3-23  NFS share client properties

    Parameter

    Description

    Value

    Type

    Client type of the NFS share. Types include:
    • Host

      Applicable to clients in a non-domain environment.

    • Network group

      Applicable to clients in an LDAP or NIS domain.

    NOTE:

    When a client is included in multiple share permissions, the priority of share authentication from high to low is in the following sequence: host name > IP address > IP network > wildcard > network group > * (anonymous).

    [Default value]

    Host

    Name or IP Address

    Name or service IP address of the NFS share client.
    NOTE:

    This parameter is available only when the Type is Host.

    [Value range]

    The name:
    • Contains 1 to 255 characters, including letters, digits, hyphens (-), periods (.), and underscores (_).
    • The value can begin with only a digit or letter and cannot end with a hyphen (-) or an underscore (_).
    • The value cannot contain consecutive periods (.), pure digits, or a period before or after an underscore or hyphen, for example, '_.', '._', '.-', or '-.'.
    The IP address:
    • You can enter the IP address of a client or the IP address segment of clients, or use asterisk (*) to represent IP addresses of all clients.
    • You can enter IPv4 addresses, IPv6 addresses, or a mixture of both.
    • The mask of IPv4 ranges from 1 to 32. The mask of IPv6 ranges from 1 to 128.

    [Example]

    192.168.0.10

    192.168.0.10;192.168.1.0/24

    NOTE:

    You can enter multiple client names or IP addresses, separated by semicolons (;).

    Network Group Name

    Network group name in LDAP or NIS domain.
    NOTE:

    This parameter is available only when the Type is Network group.

    [Value range]

    The name:
    • Contains 1 to 254 characters.
    • Can contain only letters, digits, underscores (_), periods (.), and hyphens (-).

    [Example]

    a123456

    Permission

    The permission for a client to access the NFS share. The permissions include:
    • Read-only

      Only reading the files in the share is allowed.

    • Read-write

      Any operation is allowed.

    [Default value]

    Read-only

    Write Mode (Optional)

    Write mode of the NFS share client. The modes include:
    • Synchronous: Data written to the share is written to the disk immediately.
    • Asynchronous: Data written to the share is first written to the cache and then to the disk.
    NOTE:

    Data may be lost if a client mounts an NFS share in asynchronous mode and the client and the storage system fail at the same time.

    [Default value]

    Synchronous

    Permission Constraint (Optional)

    Determines whether to retain the user ID (UID) and group ID (GID) of a shared directory.
    • all_squash: The UID and GID of a shared directory are mapped to user nobody, which is applicable to public directories.
    • no_all_squash: The UID and GID of a shared directory are retained.

    [Default value]

    no_all_squash

    root Permission Constraint (Optional)

    Controls the root permission of a client.
    • root_squash: The client cannot access the storage system as user root. If a client accesses the storage system as user root, it is mapped to user nobody.
    • no_root_squash: A client can access the storage system as user root, and user root can fully manage and access the root directory.

    [Default value]

    root_squash

  6. Confirm the addition of the NFS Share Client.
    1. Click OK.

      The Execution Result dialog box is displayed, indicating that the operation succeeded.

    2. Click Close.
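The share-authentication priority noted in Table 3-23 (host name > IP address > IP network > wildcard > network group > * (anonymous)) can be sketched as a small selection routine. The rank and winner helpers below are hypothetical, for illustration only, not storage-system commands:

```shell
# Map each rule type to its documented priority (lower rank wins).
rank() {
  case "$1" in
    hostname) echo 1 ;;
    ip)       echo 2 ;;
    network)  echo 3 ;;
    wildcard) echo 4 ;;
    netgroup) echo 5 ;;
    anon)     echo 6 ;;
  esac
}

# Given the rule types a client matches, print the winning type.
winner() {
  best=99
  win=""
  for t in "$@"; do
    r=$(rank "$t")
    if [ "$r" -lt "$best" ]; then best=$r; win=$t; fi
  done
  echo "$win"
}

winner anon network ip    # -> ip
```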

Accessing NFS Share

This section describes how a client accesses an NFS shared file system. Operating systems that support accessing NFS shared file systems include SUSE Linux, Red Hat Linux, HP-UX, Sun Solaris, IBM AIX, and Mac OS. The operations for accessing an NFS share in an LDAP or NIS domain are the same as those in a non-domain environment.

Accessing an NFS Shared File System by a SUSE or Red Hat Client
  1. Log in to the client as user root.
  2. Run showmount -e ipaddress to view available NFS shared file systems of the storage system.

    ipaddress represents the logical IP address of the storage system (172.16.128.10 is used as an example). For details about how to view the logical IP address, see Viewing Logical Port Details.

    #showmount -e 172.16.128.10
    Export list for 172.16.128.10
    /nfstest *
    #
    

    NOTE:

    /nfstest in the output represents the path of the NFS share created in the storage system.

  3. Run mount -t nfs -o vers=n,proto=m,rsize=o,wsize=p,hard,intr,timeo=q ipaddress:filesystem /mnt to mount the NFS shared file system. Table 3-24 describes the related parameters.

    #mount -t nfs -o vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,timeo=600 172.16.128.10:/nfstest /mnt
    Table 3-24   SUSE/Red Hat mount NFS shares parameters

    Parameter

    Description

    Example

    o

    Mount options, including ro, rw, and others.
    • ro: Mount the share read-only.
    • rw: Mount the share read-write.

    The default value is rw.

    vers

    The NFS version. The value can be 3 or 4.

    In a scenario where the NFS v4 sharing protocol is used, a single-controller switchover may interrupt services. In an environment that requires high reliability, you are advised to use NFS v3.

    proto

    The transfer protocol. The value can be tcp or udp.

    tcp

    rsize

    The number of bytes NFS uses when reading files from an NFS server. The unit is byte.

    1048576 is recommended (16384 for Red Hat 7).

    wsize

    The number of bytes NFS uses when writing files to an NFS server. The unit is byte.

    1048576 is recommended.

    timeo

    The retransmission interval upon timeout. The unit is one tenth of a second.

    600 is recommended (60 seconds).

    filesystem

    The path of the NFS share created in the storage system.

    -

  4. Run mount to verify that the NFS shared file system has been mounted to the local computer.

    #mount
    172.16.128.10:/nfstest on /mnt type nfs (rw,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,timeo=600,addr=172.16.128.10)
    

    When the previous output appears, the NFS shared file system has been successfully mounted to the local computer. If the actual output differs from the previous output, contact technical support engineers.
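To remount the share automatically after a reboot, the same options can be recorded in /etc/fstab. A sketch using the example share and the options from step 3 (verify option support on your distribution before use):

```
# /etc/fstab entry (one line): device, mount point, type, options, dump, pass
172.16.128.10:/nfstest  /mnt  nfs  vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,timeo=600  0  0
```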

Accessing an NFS Shared File System by an HP-UX or SUN Solaris Client
  1. Log in to the client as user root.
  2. Run showmount -e ipaddress to view available NFS shared file systems of the storage system.

    ipaddress represents the logical IP address of the storage system (172.16.128.10 is used as an example). For details about how to view the logical IP address, see Viewing Logical Port Details.

    #showmount -e 172.16.128.10
    Export list for 172.16.128.10
    /nfstest *
    #
    
    NOTE:

    /nfstest in the output represents the path of the NFS share created in the storage system.

  3. Run mount [-F nfs|-f nfs] -o vers=n,proto=m ipaddress:filesystem /mnt to mount the NFS shared file system. Table 3-25 describes the related parameters.

    #mount -f nfs -o vers=3,proto=tcp 172.16.128.10:/nfstest /mnt
    Table 3-25  HP-UX or SUN Solaris mount NFS shares parameters

    Parameter

    Description

    Example

    -F nfs or -f nfs

    Optional.

    -F nfs is available to the HP-UX client and -f nfs to the Solaris client.

    vers

    The NFS version. The value can be 3 or 4.

    In a scenario where the NFS v4 sharing protocol is used, a single-controller switchover may interrupt services. In an environment that requires high reliability, you are advised to use NFS v3.

    proto

    The transfer protocol. The value can be tcp or udp.

    tcp

    filesystem

    The path of the NFS share created in the storage system.

    -

  4. Run mount to verify that the NFS shared file system has been mounted to the local computer.

    #mount
    172.16.128.10:/nfstest on /mnt type nfs (rw,vers=3,proto=tcp,addr=172.16.128.10)
    

    When the previous output appears, the NFS shared file system has been successfully mounted to the local computer. If the actual output differs from the previous output, contact technical support engineers.

Accessing an NFS Shared File System by an IBM AIX Client
  1. Log in to the client as user root.
  2. Run showmount -e ipaddress to view available NFS shared file systems of the storage system.

    ipaddress represents the logical IP address of the storage system (172.16.128.10 is used as an example). For details about how to view the logical IP address, see Viewing Logical Port Details.

    #showmount -e 172.16.128.10
    Export list for 172.16.128.10
    /nfstest *
    #
    
    NOTE:

    /nfstest in the output represents the path of the NFS share created in the storage system.

  3. Run mount ipaddress:filesystem /mnt to mount the NFS shared file system.

    NOTE:

    filesystem represents the path of the NFS share created in the storage system.

    #mount 172.16.128.10:/nfstest /mnt
    mount: 1831-008 giving up on:
    172.16.128.10:/nfstest 
    Vmount: Operation not permitted.
    #
    
    NOTE:

    If the AIX client fails to mount the NFS shared file system after the command is executed, the cause is that the default NFS ports of AIX and Linux are inconsistent. Run the following command to solve the problem.

    #nfso -o nfs_use_reserved_ports=1
    Setting nfs_use_reserved_ports to 1
    

  4. Run mount to verify that the NFS shared file system has been mounted to the local computer.

    #mount
    172.16.128.10:/nfstest on /mnt type nfs (rw,addr=172.16.128.10)
    

    When the previous output appears, the NFS shared file system has been successfully mounted to the local computer. If the actual output differs from the previous output, contact technical support engineers.

Accessing an NFS Shared File System by a Mac OS Client
  1. Run showmount -e ipaddress to view available NFS shared file systems of the storage system.

    ipaddress represents the logical IP address of the storage system (172.16.128.10 is used as an example). For details about how to view the logical IP address, see Viewing Logical Port Details.

    Volumes root# showmount -e 172.16.128.10
    /nfstest *
    
    NOTE:

    /nfstest in the output represents the path of the NFS share created in the storage system.

  2. Run sudo /sbin/mount_nfs -P ipaddress:filesystem /Volumes/mount1 to mount the NFS shared file system.

    NOTE:

    filesystem represents the path of the NFS share created in the storage system.

    Volumes root# sudo /sbin/mount_nfs -P 172.16.128.10:/nfstest /Volumes/mount1

  3. Run mount to verify that the NFS shared file system has been mounted to the local computer.

    Volumes root# mount
    /dev/disk0s2 on / (hfs, local, journaled)
    devfs on /dev (devfs, local)
    fdesc on /dev (fdesc, union)
    map -hosts on /net (autofs, automounted)
    map auto_home on /home (autofs, automounted)
    172.16.128.10:/nfstest on /Volumes/mount1 (nfs)
    

    When the previous output appears, the NFS shared file system has been successfully mounted to the local computer. If the actual output differs from the previous output, contact technical support engineers.

Accessing an NFS Shared File System by a VMware Client
NOTE:

To create virtual machines on the NFS share, root Permission Constraint of the NFS share must be set to no_root_squash.

  1. Log in to VMware vSphere Client.
  2. Choose Localhost > Configuration > Storage > Add Storage.

    The Add Storage wizard is displayed.

  3. In Select Storage Type, select Network File System. Then, click Next.

    The Locate Network File System page is displayed.

  4. Set parameters. Table 3-26 describes related parameters.

    Table 3-26  Parameters for adding an NFS share in VMware

    Parameter

    Description

    Value

    Server

    Logical IP address of the storage system. For details about how to view the logical IP address, see Viewing Logical Port Details.

    Example

    172.16.128.10

    Folder

    Path of the NFS share created in the storage system.

    Example

    /nfstest

    Datastore Name

    Name of the NFS share in VMware.

    Example

    data

  5. Click Next.
  6. Confirm the information and click Finish.
  7. On the Configuration tab page, view the newly added NFS share.
Postrequisite

If you modify NFS user information while a client is accessing NFS shares, the new user authentication information does not take effect immediately. Wait 30 minutes for the modification to take effect.

Updated: 2019-02-01

Document ID: EDOC1000084198
