OceanStor 2600 V3 Video Surveillance Edition V300R006 Basic Storage Service Configuration Guide for File

This document is applicable to OceanStor 2600 V3 Video Surveillance Edition. It describes the basic storage services and explains how to configure and manage them on a storage system.
Configuring an NFS Share

This section describes how to configure an NFS share.

Configuration Process

Figure 3-2 shows the NFS share configuration process.

Figure 3-2 NFS share configuration process

Preparing Data

Before configuring an NFS share in a storage system, plan and collect required data to facilitate follow-up service configurations.

You need to prepare the following data:

  • Logical IP address

    Logical IP address used by a storage system to provide shared space for clients.

  • File system

    File system shared through the NFS share.

  • LDAP or NIS domain information
  • Permission

    The permissions include read-only and read-write.

    • Read-only: Clients have the read-only permission for the NFS share.
    • Read-write: Clients have the read and write permissions for the NFS share.
NOTE:

Contact your network administrator to obtain the required data.

Checking the License File

Each value-added feature requires a license file for activation. Before configuring a value-added feature, ensure that the feature's license file is valid.

Context

On DeviceManager, the NFS feature is displayed in Feature as NFS Protocol.

Procedure
  1. Log in to DeviceManager.
  2. Choose Settings > License Management.
  3. Check the active license files.

    • For V300R006C20, perform the following steps to check the activated license file:
      1. In the navigation tree on the left, choose Active License.
      2. In the middle information pane, verify the information about active license files.
    • For V300R006C30 and later versions, you can view all activated license files in the function pane at the lower part of the License Management page.

Follow-up Procedure
  • If no license for the feature is available, apply for and import a license file. For details, see the Installation Guide.
  • If the storage system generates an alarm indicating that the license has expired, obtain and import the license again.

Configuring a Network

Before configuring shared services, plan and configure a network properly for accessing and managing file services.

(Optional) Bonding Ethernet Ports

This section describes how to bond Ethernet ports on the same controller.

Prerequisites

Ethernet ports to be bonded are not configured with any IP addresses.

Context
  • Port bonding provides more bandwidth and link redundancy. Although ports are bonded, each host still transmits data through a single port and the total bandwidth can be increased only when there are multiple hosts. Determine whether to bond ports based on site requirements.
  • Port bonding on a storage system has the following restrictions:
    • Only Ethernet ports with the same rate (GE or 10GE) on the same controller can be bonded. A maximum of eight Ethernet ports can be bonded into one bond port.
    • Ethernet ports on a SmartIO interface module cannot be bonded if they are in cluster or FC mode or run FCoE service in FCoE/iSCSI mode.
    • The MTU of bonded SmartIO ports must be the same as that of the hosts.
    • Read-only users are unable to bond Ethernet ports.
    • A port can only be added to one bond port.
    • A member in a port group cannot be added to a bond port.
  • After Ethernet ports are bonded, the MTU changes to the default value and you must set the link aggregation mode for the ports. On Huawei switches, set the ports to work in static LACP mode.

    The link aggregation modes vary with switch manufacturers. If a non-Huawei switch is used, contact technical support of the switch manufacturer for specific link aggregation configurations.
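
Because the MTU of bonded SmartIO ports must match that of the hosts, you can verify the end-to-end MTU from a Linux host before enabling services. The following is a minimal sketch, assuming a Linux host, an interface named eth0, a 9000-byte MTU, and a logical port at 192.168.50.16 (all values are illustrative):

    # Set the host interface MTU (eth0 and 9000 are assumed values).
    ip link set dev eth0 mtu 9000
    # Send a non-fragmentable packet to the storage logical port.
    # 8972 = 9000-byte MTU - 20 (IP header) - 8 (ICMP header).
    ping -M do -s 8972 192.168.50.16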

Procedure
  1. Log in to DeviceManager.
  2. Choose Provisioning > Port > Bond Ports.
  3. Click Create.

    The Create Bond Port dialog box is displayed.

  4. Set the name, controller, interface module, and optional ports that can be bonded.

    1. Specify Name for the bond port.

      The name:

      • Contains only letters, digits, underscores (_), periods (.), and hyphens (-).
      • Contains 1 to 31 characters.
    2. From Controller, select the owning controller of the Ethernet ports to be bonded.
    3. Specify Interface Module.
    4. From the Optional port list, select the Ethernet ports you want to bond.
      NOTE:
      • Select at least two ports.
      • The port name format is controller enclosure ID.interface module ID.port ID.
    5. Click OK.

      The security alert dialog box is displayed.

  5. Confirm the bonding of the Ethernet ports.

    1. Confirm the information in the dialog box and select I have read and understand the consequences associated with performing this operation.
    2. Click OK.

      The Success dialog box is displayed, indicating that the operation succeeded.

    3. Click OK.

(Optional) Creating a VLAN

Ethernet ports and bond ports on a storage system can be added to multiple independent VLANs. You can configure different services in different VLANs to ensure the security and reliability of service data.

Prerequisites

The Ethernet ports for which you want to create VLANs have not been assigned IP addresses or used for networking.

Procedure
  1. Log in to DeviceManager.
  2. Choose Provisioning > Port > VLAN.
  3. Click Create.

    The Create VLAN dialog box is displayed.

  4. Select the type of ports used to create VLANs from the Port Type drop-down list.

    Port Type can be Ethernet port or Bond port.

  5. In the port list, select the desired Ethernet port or bond port.
  6. In ID, enter the VLAN ID and click Add.

    NOTE:
    • The VLAN ID ranges from 1 to 4094. You can enter a single VLAN ID or VLAN IDs in batches in the format of "start ID-end ID".
    • To remove a VLAN ID, select it and click Remove.

  7. Click OK.

    The Execution Result dialog box is displayed, indicating that the operation succeeded.

  8. Click Close.
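
For clients to reach a logical port created on a VLAN, the client-side network must tag traffic with the same VLAN ID. The following is a minimal sketch of client-side tagging on a Linux host using iproute2, assuming interface eth0, VLAN ID 100, and an illustrative client IP address:

    # Create a VLAN subinterface that tags traffic with VLAN ID 100.
    ip link add link eth0 name eth0.100 type vlan id 100
    # Assign the client an address on the VLAN's subnet (illustrative).
    ip addr add 192.168.100.20/24 dev eth0.100
    ip link set dev eth0.100 up
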
Creating a Logical Port

This section describes how to create a logical port for managing and accessing files based on Ethernet ports, bond ports, or VLANs.

Context

Logical ports are virtual ports that carry host services. A unique IP address is allocated to each logical port for carrying services.

Procedure
  1. Log in to DeviceManager.
  2. Choose Provisioning > Port > Logical Ports.
  3. Click Create.

    The Create Logical Port dialog box is displayed.

  4. In the Create Logical Port dialog box, configure related parameters.

    Table 3-8 describes the related parameters.

    NOTE:

    GUIs may vary with product versions and models. The actual GUIs prevail.

    Table 3-8 Logical port parameters

    Parameter

    Description

    Value

    Name

    Name of the logical port.

    The name:

    • Must be unique.
    • Can contain only letters, digits, underscores (_), periods (.), and hyphens (-).
    • Must contain 1 to 31 characters.

    [Example]

    Lif01

    IP Address Type

    IP address type of the logical port, including IPv4 Address and IPv6 Address.

    [Example]

    IPv4 Address

    IPv4 Address

    IPv4 address of the logical port.

    [Example]

    192.168.50.16

    Subnet Mask

    IPv4 subnet mask of the logical port.

    [Example]

    255.255.255.0

    IPv4 Gateway

    IPv4 gateway of the logical port.

    [Example]

    192.168.50.1

    IPv6 Address

    IPv6 address of the logical port.

    [Example]

    fc00::1234

    Prefix

    IPv6 prefix length of the logical port.

    [Example]

    64

    IPv6 Gateway

    IPv6 gateway of the logical port.

    [Example]

    fc00::1

    Home Port

    Port to which the logical port belongs, including Ethernet port, Bond port, and VLAN.

    [Example]

    CTE0.A.IOM0.P0

    Failover Group

    Failover group name.

    NOTE:
    • If a failover group is specified, services on the failed home port will be taken over by a port in the specified failover group.
    • If no failover group is specified, services on the failed home port will be taken over by a port in the default failover group.

    [Example]

    System-defined

    IP Address Failover

    After IP address failover is enabled, services fail over to other normal ports within the failover group if the home port fails. In addition, the IP address used by services remains unchanged.

    NOTE:

    File system shares do not support multipathing. IP address failover is used to improve link reliability.

    [Example]

    Enable

    Failback Mode

    Mode in which services fail back to the home port after the home port is recovered. The mode can be Manual or Automatic.

    NOTE:
    • If Failback Mode is Manual, ensure that the link to the home port is normal before the failback. Services can be manually failed back to the home port only after the link to the home port has remained normal for over five minutes.
    • If Failback Mode is Automatic, ensure that the link to the home port is normal before the failback. Services automatically fail back to the home port only after the link to the home port has remained normal for over five minutes.

    [Example]

    Automatic

    Activate Now

    Whether to activate the logical port immediately.

    [Example]

    Enable

    Role

    Roles of the logical ports, including:

    • Management: The port is used by a super administrator to log in to the system for management.
    • Service: The port is used by a super administrator to access services such as CIFS shares.
    • Management+Service: The port is used by a super administrator to log in to the system to manage the system and access services.

    [Example]

    Service

    Dynamic DNS

    When dynamic DNS is enabled, the DNS service will automatically and periodically update the IP address configured for the logical port.

    [Example]

    Enable

    Listen DNS Query Request

    After this function is enabled, external NEs can access the DNS service provided by the storage system by using the IP address of this logical port.

    [Example]

    Disabled

    DNS Zone

    Name of the DNS zone.

    NOTE:
    • If you do not specify this parameter, the logical port will not be used for DNS-based load balancing.
    • Only the logical ports whose Role is Service or Management+Service can be added to a DNS zone. The logical ports whose Role is Management cannot be added to a DNS zone.
    • One logical port can be associated with only one DNS zone. One DNS zone can be associated with multiple logical ports.
    • A DNS zone can be associated with both IPv4 and IPv6 logical ports.
    • The load balancing effect varies with the distribution of logical ports associated with a DNS zone. To obtain a better load balancing effect, ensure that logical ports associated with a DNS zone are evenly distributed among controllers.

    [Example]

    None

  5. Click OK.

    The Success dialog box is displayed, indicating that the logical port has been successfully created.

  6. Click OK.
(Optional) Configuring DNS-based Load Balancing Parameters

The DNS-based load balancing feature can detect loads on various IP addresses on a storage system in real time and use a proper IP address as the DNS response to achieve load balancing among IP addresses.

Context

Working principle:

  1. When a host accesses the NAS service of a storage system using a domain name, the host first sends a DNS request to the built-in DNS server and the DNS server obtains the IP address according to the domain name.
  2. If the domain name contains multiple IP addresses, the storage system selects the IP address with a light load as the DNS response based on the configured load balancing policy and returns the DNS response to the host.
  3. After receiving the DNS response, the host sends a service request to the destination IP address.
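
To observe this behavior from a client, you can query the storage system's built-in DNS server directly. The following is a minimal sketch using dig, assuming a logical port at 192.168.50.16 with Listen DNS Query Request enabled and an illustrative DNS zone name nas.example.com:

    # Ask the storage system's built-in DNS server to resolve the zone name.
    # Repeated queries may return different IP addresses as loads change.
    dig @192.168.50.16 nas.example.com +short
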
Procedure
  1. Log in to DeviceManager.
  2. Choose Settings > Storage Settings > File Storage Service > DNS-based Load Balancing.

    Table 3-9 lists parameters related to DNS-based load balancing.
    Table 3-9 DNS-based load balancing parameters

    Parameter

    Description

    Value

    DNS-based Load Balancing

    Enables or disables DNS-based load balancing.

    NOTE:
    • When enabling the DNS-based load balancing function, you are advised to disable the GNS forwarding function, because GNS forwarding affects DNS-based load balancing.
    • After the DNS-based load balancing function is disabled, the domain name resolution service is unavailable and file systems cannot use the function.
    • This parameter can be set only in the system view, not in the vStore view. The setting takes effect for the entire storage system.

    [Example]

    Enabled

    Load Balancing Policy

    Specifies a DNS-based load balancing policy. The following load balancing policies are available:

    • Weighted round robin: When a client uses a domain name to initiate an access request, the storage system calculates weights based on performance data. Under the same domain name, the IP addresses available to process loads have the same probability of being selected to process client services.
    • CPU usage: When a client uses a domain name to initiate an access request, the storage system calculates the weight based on the CPU usage of each node. Using the weight as the probability reference, the storage system selects a node to process the client's service request.
    • Bandwidth usage: When a client uses a domain name to initiate an access request, the storage system calculates the weight based on the total bandwidth usage of each node. Using the weight as the probability reference, the storage system selects a node to process the client's service request.
    • Open connections: When a client uses a domain name to initiate an access request, the storage system calculates the weight based on the NAS connections of each node. Using the weight as the probability reference, the storage system selects a node to process the client's service request.
    • Overall load: When a client uses a domain name to initiate an access request, the storage system selects a node to process the client's service request based on the comprehensive load. The comprehensive node load is calculated based on the CPU usage, bandwidth usage, and number of NAS connections. Less loaded nodes are more likely to be selected.
    NOTE:

    This parameter can be set only in the system view, not in the vStore view. The setting takes effect for the entire storage system.

    [Example]

    Weighted round robin

  3. Configure a DNS zone.

    A DNS zone contains IP addresses of a group of logical ports. A host can use the name of a DNS zone to access shared services provided by a storage system. Services can be evenly distributed to logical ports.

    NOTE:

    Only the logical ports whose Role is Service or Management+Service can be added to a DNS zone. The logical ports whose Role is Management cannot be added to a DNS zone.

    1. Add a DNS zone.
      1. Click Add.
      2. The Add DNS Zone dialog box is displayed. In Domain Name, type the domain name of the DNS zone you want to add and click OK.
      NOTE:

      The domain name complexity requirements are as follows:

      • The domain name can contain 1 to 255 characters and consists of multiple labels separated by periods (.).
      • A label can contain 1 to 63 characters including letters, digits, hyphens (-), and underscores (_), and must start and end with a letter or a digit.
      • The domain name must be unique.
    2. Remove a DNS zone.
      1. In the DNS zones that are displayed, select a DNS zone you want to remove.
      2. Click Remove.
    3. Modify a DNS zone.
      1. In the DNS zones that are displayed, select a DNS zone you want to modify.
      2. Click Modify.
      3. The Modify DNS Zone dialog box is displayed. In Domain Name, type the domain name of the DNS zone you want to modify and click OK.
    4. View a DNS zone.
      1. In DNS Zone, type a keyword and click Search.
      2. In DNS Zone, the DNS zone names relevant to the keyword will be displayed.
    NOTE:

    You can select a DNS zone to modify or remove it.

  4. Click Save.

    The Warning dialog box is displayed.

  5. Confirm the information in the dialog box and select I have read and understand the consequences associated with performing this operation.
  6. Click OK.

    The Execution Result page is displayed.

  7. On the Execution Result page, confirm the modification and click Close. The DNS zone configuration is complete.
Follow-up Procedure

After associating logical ports with a DNS zone, configuring logical ports to listen to DNS requests, setting a DNS-based load balancing policy, and enabling DNS-based load balancing, you need to configure DNS server addresses on clients. For details about how to configure and use DNS-based load balancing, see How Can I Configure and Use DNS-based Load Balancing?
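
As one way to configure DNS server addresses on clients, a Linux client can point its resolver at a logical port that listens for DNS queries. The following is a minimal sketch with illustrative addresses; the exact resolver configuration depends on your client operating system:

    # /etc/resolv.conf on the Linux client (illustrative).
    # 192.168.50.16 is a logical port with Listen DNS Query Request enabled.
    nameserver 192.168.50.16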

(Optional) Managing the Routes of a Logical Port

When configuring share access, ensure that the logical port can ping the IP addresses of the domain controller, DNS server, and clients. If the ping test fails, add routes from the IP address of the logical port to the network segment of the domain controller, DNS server, or clients.

Prerequisites

The logical port has been assigned an IP address.

Procedure
  1. Log in to DeviceManager.
  2. Choose Provisioning > Port > Logical Ports.
  3. Select the logical port for which you want to add a route and click Route Management.

    The Route Management dialog box is displayed.

  4. Configure the route information for the logical port.

    1. Click Add.

      The Add Route dialog box is displayed.

    The default internal heartbeat IP addresses on a dual-controller storage system are 127.127.127.10 and 127.127.127.11; on a four-controller storage system, they are 127.127.127.10, 127.127.127.11, 127.127.127.12, and 127.127.127.13. Therefore, the destination address cannot fall within the 127.127.127.XXX segment, and the gateway IP address cannot be 127.127.127.10, 127.127.127.11, 127.127.127.12, or 127.127.127.13. Otherwise, routing will fail. (Internal heartbeat links are established between controllers so that they can detect each other's working status. You do not need to connect cables separately. Internal heartbeat IP addresses are assigned before delivery and cannot be changed.)

    1. In Type, select the type of the route to be added.

      Possible values are Default route, Host route, and Network segment route.

    2. Set Destination Address.
      • If IP Address is an IPv4 address, set Destination Address to the IPv4 address or network segment of the application server's service network port or that of the other storage system's logical port.
      • If IP Address is an IPv6 address, set Destination Address to the IPv6 address or network segment of the application server's service network port or that of the other storage system's logical port.
    3. Set Destination Mask (IPv4) or Prefix (IPv6).
      • Destination Mask specifies the subnet mask of the IPv4 address for the service network port on the application server or storage device.
      • Prefix specifies the prefix of the IPv6 address for the application server's service network port or that of the other storage system's logical port.
    4. In Gateway, enter the gateway for the IP address of the local storage system's logical port.

  5. Click OK. The route information is added to the route list.

    The security alert dialog box is displayed.

  6. Confirm the information in the dialog box and select I have read and understand the consequences associated with performing this operation.
  7. Click OK.

    The Success dialog box is displayed, indicating that the operation succeeded.

    NOTE:

    To remove a route, select it and click Remove.

  8. Click Close.
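
For example, to let a logical port at 192.168.50.16 reach clients on a different network segment, you could add a network segment route such as the following (all values are illustrative):

    Type:                Network segment route
    Destination Address: 192.168.60.0
    Destination Mask:    255.255.255.0
    Gateway:             192.168.50.1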

Setting the NFS Service

This section describes how to set the NFS service.

Setting the NFS Service (Applicable to V300R006C20)

Before configuring an NFS share, enable and configure the NFS service.

Prerequisites

The license for the NFS protocol has been imported and activated.

Context

Storage systems support NFSv3 and NFSv4.

Procedure
  1. Log in to DeviceManager.
  2. Choose Settings > Storage Settings > File Storage Service > NFS Service.
  3. Enable the NFS service according to the protocol version used by the host to mount NFS shares.

    • If the host uses NFSv3 to mount shares, select Enable NFSv3.
    • If the host uses NFSv4 to mount shares, perform the following operations:
      1. Click Advanced and select Enable NFSv4.
      2. Enter the storage domain name in Domain Name.
    NOTE:
    • NFSv4 uses the user name + domain name mapping mechanism, enhancing the security of clients' access to shared resources. It is recommended that hosts use NFSv4 to mount shares.
    • In a non-domain or LDAP environment, enter the default domain name localdomain.
    • In an NIS environment, the entered domain name must be consistent with the domain name in the /etc/idmapd.conf file on the Linux client that accesses shares. It is recommended that both be set to the NIS domain name.
    • The domain name must contain 1 to 64 characters.
    • To disable the NFS service, deselect Enable NFSv3 or Enable NFSv4.

  4. Click Save.

    The Success dialog box is displayed, indicating that the operation succeeded.

  5. Click OK.
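
For reference, after the NFS service is enabled, a Linux host can mount a share using the matching protocol version. The following is a minimal sketch, assuming a logical port at 192.168.50.16 and a share path /FileSystem001 (illustrative values):

    # Mount using NFSv3.
    mount -t nfs -o vers=3 192.168.50.16:/FileSystem001 /mnt/share
    # Mount using NFSv4.
    mount -t nfs -o vers=4 192.168.50.16:/FileSystem001 /mnt/share
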
Setting the NFS Service (Applicable to V300R006C30 and Later Versions)

Before configuring an NFS share, enable and configure the NFS service.

Prerequisites

The license for the NFS protocol has been imported and activated.

Context

Storage systems support NFSv3, NFSv4.0, and NFSv4.1.

Procedure
  1. Log in to DeviceManager.
  2. Choose Settings > Storage Settings > File Storage Service > NFS Service.
  3. Enable the NFS service according to the protocol version used by the host to mount NFS shares.

    • If the host uses NFSv3 to mount shares, select Enable NFSv3.
    • If the host uses NFSv4.0 or NFSv4.1 to mount shares, perform the following operations:
      1. Click Advanced and select Enable NFSv4 or Enable NFSv4.1.
      2. Enter the storage domain name in Domain Name.
    NOTE:
    • Before enabling NFSv4.1, verify that the operating system of the host connecting to the storage system is compatible with the storage system. You can use the Huawei Storage Interoperability Navigator to query the compatibility. If the operating system is not compatible with the storage system, service continuity cannot be ensured.
    • NFSv4.0 and NFSv4.1 use the user name + domain name mapping mechanism, enhancing the security of clients' access to shared resources. It is recommended that hosts use NFSv4.0 or NFSv4.1 to mount shares.
    • In a non-domain or LDAP environment, enter the default domain name localdomain.
    • In an NIS environment, the entered domain name must be consistent with the domain name in the /etc/idmapd.conf file on the Linux client that accesses shares. It is recommended that both be set to the NIS domain name.
    • The domain name must contain 1 to 64 characters.
    • To disable the NFS service, deselect Enable NFSv3, Enable NFSv4, or Enable NFSv4.1.

  4. Click Save.

    The Success dialog box is displayed, indicating that the operation succeeded.

  5. Click OK.
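
For reference, the domain name entered here must match the Domain setting in the /etc/idmapd.conf file on Linux clients that mount shares using NFSv4.0 or NFSv4.1. The following is a minimal sketch of the relevant client-side setting, assuming the default domain name localdomain:

    # /etc/idmapd.conf on the Linux client.
    [General]
    Domain = localdomain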

(Optional) Adding a Storage System to an LDAP Domain

This section describes how to add a storage system to an LDAP domain.

Preparing LDAP Domain Configuration Data

Before adding a storage system to an LDAP domain, collect the configuration data of an LDAP domain server.

LDAP Domain Parameters

LDAP data is organized in a tree structure that clearly lays out organizational information. A node on this tree is called an entry. Each entry has a distinguished name (DN). The DN of an entry is composed of a base DN and relative DNs (RDNs). The base DN refers to the position of the parent node where the entry resides on the tree, and the RDN refers to an attribute that distinguishes the entry from others such as UID or CN.

LDAP directories function like file system directories. For example, directory dc=redmond,dc=wa,dc=microsoft,dc=com can be regarded as the following path of a file system directory: com\microsoft\wa\redmond. In another example, in directory cn=user1,ou=user,dc=example,dc=com, cn=user1 indicates a user name and ou=user indicates the organization unit of an Active Directory (AD); that is, user1 is in the user organization unit of the example.com domain.

The following figure shows the data structure of an LDAP server:

Figure 3-3 Data structure of an LDAP server

Table 3-10 defines LDAP entry acronyms.

Table 3-10 LDAP entry definitions

  • o: Organization
  • ou: Organization unit
  • c: Country name
  • dc: Domain component
  • sn: Surname
  • cn: Common name
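
For reference, you can inspect such entries from any host where the OpenLDAP client tools are installed. The following is a minimal sketch using ldapsearch, with illustrative server address, base DN, and bind DN taken from the examples in this section:

    # Search for user1 under the example.com base DN; -W prompts for the bind password.
    ldapsearch -x -H ldap://192.168.1.10 -D "cn=Manager,dc=example,dc=com" -W \
      -b "dc=example,dc=com" "(cn=user1)"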

What Is OpenLDAP?

OpenLDAP is an open-source implementation of LDAP that is widely used in popular Linux distributions.

OpenLDAP consists of the following four components:

  • slapd: an independent LDAP daemon
  • slurpd: an independent LDAP update and replication daemon
  • Libraries implementing the LDAP protocol
  • Utilities and sample clients

The OpenLDAP website does not provide OpenLDAP installation packages for Windows operating systems. You can obtain OpenLDAP installation packages for the following Windows operating systems from the Userbooster website: Windows XP, Windows Server 2003, Windows Server 2008, Windows Vista, Windows 7, Windows 8, and Windows Server 2012.

Obtaining LDAP Configuration Data in Windows

The following describes how to obtain LDAP configuration data in Windows using OpenLDAP as an example:

  1. Go to the OpenLDAP installation directory.
  2. Find the slapd.conf system configuration file.
  3. Use text editing software to open the configuration file and search for the following fields:

    suffix "dc=example,dc=com" 
    rootdn  "cn=Manager,dc=example,dc=com" 
     
    rootpw    XXXXXXXXXXXX     

    Mappings between the fields and parameters on the storage system configuration page are as follows:

    • dc=example,dc=com corresponds to Base DN.
    • cn=Manager,dc=example,dc=com corresponds to Bind DN.
    • XXXXXXXXXXXX corresponds to Bind Password. If the password is in ciphertext, contact LDAP server administrators to obtain the password.

  4. Find configuration files (.ldif files) of the users and user groups that need to access the storage system.

    NOTE:

    LDAP Data Interchange Format (LDIF) is one of the most common file formats for LDAP applications. It is a standard mechanism for representing directories in text format. It allows users to import data to and export data from the directory server. LDIF files store LDAP configurations and directory contents and therefore can provide you with related information.

  5. Use text editing software to open the configuration file and find the DNs of a user and a user group that correspond to User Directory and Group Directory respectively on the storage system configuration page.

    #root on the top 
    dn: dc=example,dc=com 
    dc: example 
    objectClass: domain 
    objectClass: top 
    #First organization unit name: user 
    dn: ou=user,dc=example,dc=com 
    ou: user 
    objectClass: organizationalUnit 
    objectClass: top 
    #Second organization unit name: groups 
    dn: ou=group,dc=example,dc=com 
    ou: groups 
    objectClass: organizationalUnit 
    objectClass: top 
    #The first user represents user1 that belongs to organization unit user in the organizational structure topology. 
    dn: cn=user1,ou=user,dc=example,dc=com 
    cn: user1 
    objectClass: posixAccount 
    objectClass: shadowAccount 
    objectClass: inetOrgPerson 
    sn: user1 
    uid: user1 
    uidNumber: 2882 
    gidNumber: 888 
    homeDirectory: /export/home/ldapuser 
    loginShell: /bin/bash 
    userPassword: {ssha}eoWxtWNl8YbqsulnwFwKMw90Cx5BSU9DRA==xxxxxx 
    #The second user represents user2 that belongs to organization unit user in the organizational structure topology. 
    dn: cn=user2,ou=user,dc=example,dc=com 
    cn: user2 
    objectClass: posixAccount 
    objectClass: shadowAccount 
    objectClass: inetOrgPerson 
    sn: client 
    uid: client 
    uidNumber: 2883 
    gidNumber: 888 
    homeDirectory: /export/home/client 
    loginShell: /bin/bash 
    userPassword: {ssha}eoWxtWNl8YbqsulnwFwKMw90Cx5BSU9DRA==xxxxxx 
    #The first user group represents group1 that belongs to organization unit group in the organizational structure topology. The group contains user1 and user2. 
    dn: cn=group1,ou=group,dc=example,dc=com 
    cn: group1 
    gidNumber: 888 
    memberUid: user1#Belongs to the group. 
    memberUid: user2#Belongs to the group. 
    objectClass: posixGroup     

Obtaining LDAP Configuration Data in Linux

The following describes how to obtain LDAP configuration data in Linux using OpenLDAP as an example:

  1. Log in to an LDAP server as user root.
  2. Run the cd /etc/openldap command to go to the /etc/openldap directory.

    linux-ldap:~ # cd /etc/openldap 
    linux-ldap:/etc/openldap #

  3. Run the ls command to view the system configuration file slapd.conf and the configuration files (.ldif files) of the users and user groups who want to access the storage system.

    linux-ldap:/etc/openldap #ls 
    example.ldif ldap.conf schema slap.conf slap.con slapd.conf

  4. Run the cat command to open the system configuration file slapd.conf where you can view related parameters.

    linux-ldap:/etc/openldap #cat slapd.conf 
     
    suffix "dc=example,dc=com" 
    rootdn  "cn=Manager,dc=example,dc=com" 
     
    rootpw    XXXXXXXXXXXX     

    Mappings between the fields and parameters on the storage system configuration page are as follows:

    • dc=example,dc=com corresponds to Base DN.
    • cn=Manager,dc=example,dc=com corresponds to Bind DN.
    • XXXXXXXXXXXX corresponds to Bind Password. If the password is in ciphertext, contact LDAP server administrators to obtain the password.

  5. Run the cat command to open the example.ldif file. Find the DNs of a user and a user group that correspond to User Directory and Group Directory respectively on the storage system configuration page. For details about the parameters, see 5.
Configuring LDAP Domain Authentication Parameters

If an LDAP domain server is deployed on the customer's network, add the storage system to the LDAP domain. After the storage system is added to the LDAP domain, the LDAP domain server can authenticate NFS clients when they attempt to access the storage system's shared resources.

Prerequisites
  • An LDAP domain has been set up.
  • Required data has been obtained.
NOTE:
  • The 2000, 5000, and 6000 series storage systems can be connected to an LDAP server through management network ports or service network ports (logical ports). If a storage system communicates with an LDAP server through a management network port, the management network port of each controller must be connected properly to the LDAP server. If a storage system communicates with an LDAP server through a service network port, the service network port of each controller under each vStore must be connected properly to the LDAP server. It is recommended that storage systems use service network ports to connect to an LDAP server.
  • A storage system can be connected to only one LDAP server.
Precautions
  • You are advised to use physical isolation and end-to-end encryption to ensure security of data transfer between clients and LDAP servers.
  • You are advised to configure a static IP address for the LDAP server. If a dynamic IP address is configured, security risks may exist.
Procedure
  1. Log in to DeviceManager.
  2. Choose Settings > Storage Settings > File Storage Service > Domain Authentication.
  3. In the LDAP Domain Settings area, configure the LDAP domain authentication parameters.

    Table 3-11 describes the related parameters.

    Table 3-11 LDAP domain parameters

    Parameter

    Description

    Value

    Primary Server Address

    IP address or domain name of the primary LDAP domain server.

    NOTE:
    • Ensure that the IP address or domain name is reachable. Otherwise, user authentication commands and network commands will time out.
    • Click Test to check whether the entered IP address or domain name is reachable.

    [Example]

    192.168.1.10

    www.test.com

    Standby Server Address 1

    IP address or domain name of standby LDAP server 1.

    NOTE:
    • Ensure that the IP address or domain name is reachable. Otherwise, user authentication commands and network commands will time out.
    • Click Test to check whether the entered IP address or domain name is reachable.

    [Example]

    192.168.1.11

    www.test.com

    Standby Server Address 2

    IP address or domain name of standby LDAP server 2.

    NOTE:
    • Ensure that the IP address or domain name is reachable. Otherwise, user authentication commands and network commands will time out.
    • Click Test to check whether the entered IP address or domain name is reachable.

    [Example]

    192.168.1.12

    www.test.com

    Port

    Port used by the system to communicate with the LDAP domain server.

    • Default port ID for the LDAP server: 389
    • Default port ID for the LDAPS server: 636

    [Value Range]

    Integer ranging from 1 to 65535.

    [Example]

    636

    Protocol

    Protocol used by the system to communicate with the LDAP domain server.

    • LDAP: indicates that the system uses the standard LDAP protocol to communicate with the LDAP domain server.
    • LDAPS: indicates that the system uses LDAP over SSL to communicate with the LDAP domain server. You can select this protocol if the LDAP domain server supports SSL.
    NOTE:

    Before selecting the LDAPS protocol, import the CA certificate file for the LDAP domain server. If the LDAP server is required to authenticate the storage system, import the certificate file and private key file. For details, see How Can I Import a CA Certificate File for the LDAP Domain Server?

    [Example]

    LDAPS

    Base DN

    DN where searches start in an LDAP domain.

    [Rule]

    A DN consists of RDNs which are separated by commas (,). An RDN is in the format of key=value. The value must not start with a pound (#) or a space and must not end with a space. For example, testDn=testDn,xxxDn=xxx.

    [Format]

    xxx=yyy, separated by commas (,).

    [Example]

    dc=example,dc=com

    Bind Using the AD Credential

    Enables or disables the bind using the AD credential function.

    [Example]

    Disable

    Bind Authentication Level

    Bind authentication level for LDAP.

    • simple: simple authentication.
    • SASL: Simple Authentication and Security Layer.

    [Example]

    simple

    User Search Scope

    Search scope for user queries.

    • subtree: searches the named DN directory and subnodes under the DN.
    • onelevel: searches the subnodes under the DN.
    • base: searches the named DN directory.

    [Example]

    subtree

    Group Search Scope

    Search scope for user group queries.

    • subtree: searches the named DN directory and subnodes under the DN.
    • onelevel: searches the subnodes under the DN.
    • base: searches the named DN directory.

    [Example]

    subtree

    Netgroup DN

    Specifies the netgroup DN.

    [Format]

    xxx=yyy, separated by commas (,).

    [Example]

    ou=netgroup,dc=example,dc=com

    Netgroup Search Scope

    Search scope for netgroup queries.

    • subtree: searches the named DN directory and subnodes under the DN.
    • onelevel: searches the subnodes under the DN.
    • base: searches the named DN directory.

    [Example]

    subtree

    Bind DN

    Name of the bind directory (bind DN).

    NOTE:

    To access content, you must use this directory for searching.

    [Rule]

    A DN consists of RDNs which are separated by commas (,). An RDN is in the format of key=value. The value cannot start with a pound (#) or a space and cannot end with a space. For example, testDn=testDn,xxxDn=xxx.

    [Format]

    xxx=yyy, separated by commas (,).

    [Example]

    cn=Manager,dc=example,dc=com

    Bind Password

    Password for accessing the bind directory.

    NOTE:

    A simple password may cause security risks. A complex password is recommended, for example, a password containing uppercase letters, lowercase letters, digits, and special characters.

    [Example]

    !QAZ2wsx

    Confirm Bind Password

    Confirms the password used by the system to log in to the LDAP domain server.

    [Example]

    !QAZ2wsx

    User Directory

    User DN configured by the LDAP domain server.

    [Example]

    ou=user,dc=admin,dc=com

    Group Directory

    User group DN configured by the LDAP domain server.

    [Example]

    ou=group,dc=admin,dc=com

    Search Timeout Duration (seconds)

    Timeout duration of a client waiting for the search result from the LDAP server. The default value is 3 seconds.

    [Example]

    3

    Connection Timeout Duration (seconds)

    Timeout duration of a client connecting to the LDAP server. The default value is 3 seconds.

    [Example]

    3

    Idle Timeout Duration (seconds)

    If the LDAP server and client do not communicate with each other within this duration, the connection is closed. The default value is 30 seconds.

    [Example]

    30

  4. Click Advanced to set the advanced parameters of the LDAP server.

    Table 3-12 shows the related parameters.

    Table 3-12 Advanced parameters

    Parameter

    Description

    Value

    LDAP Schema Template

    The following types of LDAP schema templates are available:

    • RFC2307: schema based on RFC2307.
    • AD_IDMU: schema based on Active Directory Identity Management for UNIX.
    NOTE:
    • After you select a schema template, relevant parameters are entered automatically. You can also customize relevant parameters instead of selecting a schema template.
    • A schema defines the structure and rules for LDAP directories and how LDAP servers identify categories, attributes, and other information of LDAP directories.

    [Example]

    RFC2307

    RFC2307 posixAccount Object Class

    Name of the RFC2307 posixAccount object class.

    [Value range]

    This parameter can be left blank or contain up to 1024 characters.

    [Example]

    posixAccount

    [Default value]

    • posixAccount (displayed by default when RFC2307 is selected for LDAP Schema Template)
    • User (displayed by default when AD_IDMU is selected for LDAP Schema Template)

    RFC2307 posixGroup Object Class

    Name of the RFC2307 posixGroup object class.

    [Value range]

    This parameter can be left blank or contain up to 1024 characters.

    [Example]

    posixGroup

    [Default value]

    • posixGroup (displayed by default when RFC2307 is selected for LDAP Schema Template)
    • Group (displayed by default when AD_IDMU is selected for LDAP Schema Template)

    RFC2307 nisNetgroup Object Class

    Name of the RFC2307 nisNetgroup object class.

    [Value range]

    This parameter can be left blank or contain up to 1024 characters.

    [Example]

    nisNetgroup

    [Default value]

    nisNetgroup

    RFC2307 uid Attribute

    Name of the RFC2307 uid attribute.

    [Value range]

    This parameter can be left blank or contain up to 1024 characters.

    [Example]

    uid

    [Default value]

    uid

    RFC2307 uidNumber Attribute

    Name of the RFC2307 uidNumber attribute.

    [Value range]

    This parameter can be left blank or contain up to 1024 characters.

    [Example]

    uidNumber

    [Default value]

    uidNumber

    RFC2307 gidNumber Attribute

    Name of the RFC2307 gidNumber attribute.

    [Value range]

    This parameter can be left blank or contain up to 1024 characters.

    [Example]

    gidNumber

    [Default value]

    gidNumber

    RFC2307 cn (for Groups) Attribute

    Name of the RFC2307 cn (for groups) attribute.

    [Value range]

    This parameter can be left blank or contain up to 1024 characters.

    [Example]

    cn

    [Default value]

    cn

    RFC2307 cn (for Netgroups) Attribute

    Name of the RFC2307 cn (for Netgroups) attribute.

    [Value range]

    This parameter can be left blank or contain up to 1024 characters.

    [Example]

    cn

    [Default value]

    • cn (displayed by default when RFC2307 is selected for LDAP Schema Template)
    • name (displayed by default when AD_IDMU is selected for LDAP Schema Template)

    RFC2307 memberUid Attribute

    Name of the RFC2307 memberUid attribute.

    [Value range]

    This parameter can be left blank or contain up to 1024 characters.

    [Example]

    memberUid

    [Default value]

    memberUid

    RFC2307 memberNisNetgroup Attribute

    Name of the RFC2307 memberNisNetgroup attribute.

    [Value range]

    This parameter can be left blank or contain up to 1024 characters.

    [Example]

    memberNisNetgroup

    [Default value]

    memberNisNetgroup

    RFC2307 nisNetgroup Triple Attribute

    Name of the RFC2307 nisNetgroupTriple attribute.

    [Value range]

    This parameter can be left blank or contain up to 1024 characters.

    [Example]

    nisNetgroupTriple

    [Default value]

    • nisNetgroupTriple (displayed by default when RFC2307 is selected for LDAP Schema Template)
    • NisNetgroupTriple (displayed by default when AD_IDMU is selected for LDAP Schema Template)

    Whether the RFC2307bis is supported

    Whether to enable RFC2307bis.

    [Default value]

    Disable

    RFC2307bis groupOfUniqueNames Object Class

    Name of the RFC2307bis groupOfUniqueNames object class. This parameter is valid only when Whether the RFC2307bis is supported is set to Enable.

    [Value range]

    This parameter can be left blank or contain up to 1024 characters.

    [Example]

    groupOfUniqueName

    [Default value]

    groupOfUniqueName

    RFC2307bis uniqueMember Attribute

    Name of the RFC2307bis uniqueMember attribute. This parameter is valid only when Whether the RFC2307bis is supported is set to Enable.

    [Value range]

    This parameter can be left blank or contain up to 1024 characters.

    [Example]

    uniqueMember

    [Default value]

    uniqueMember

  5. Click Save.

    NOTE:

    You can click Restore to Initial to initialize LDAP domain authentication settings.

(Optional) Adding a Storage System to an NIS Domain

This section describes how to add a storage system to an NIS domain.

Preparing NIS Domain Configuration Data

Before adding a storage system to an NIS domain, collect the configuration data of an NIS server.

Why NIS Domains?

In UNIX shared mode, all nodes that provide the sharing service need to maintain related configuration files such as /etc/hosts and /etc/passwd. For example, if you add a new node to the shared network, all UNIX-based systems need to update their /etc/hosts files to include the name of the new node. If you add a new user who may need to access all nodes, all the systems need to modify their /etc/passwd files. The above operations are time-consuming when the number of nodes is more than 10.

NIS, developed by Sun Microsystems, uses a single system (the NIS server) to manage and maintain the files containing information about host names and user accounts, providing references for all the systems configured as NIS clients. When NIS is used, to add a host to the shared network you only need to modify a related file on the NIS server; the modification is then propagated to the other nodes on the network.

The following figure shows the relationship between an NIS server and other hosts.

Working Principles

When NIS is configured, the ASCII files in the NIS domain are converted to NIS database files (or mapping table files). Hosts in the NIS domain query and parse the NIS database files to perform operations such as authorized access and updates. For example, the common password file /etc/passwd of a UNIX host is converted to the NIS database files passwd.byname and passwd.byuid.

Parameters

Default maps for an NIS domain are located in each server's /var/yp/domainname directory. For example, the maps that belong to the domain test.com are located in each server's /var/yp/test.com directory.

The system super administrator can run the /usr/bin/domainname command to rename a domain in interactive mode. Common users can run the domainname command without parameters to obtain the default domain name of the local system.
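
For reference, you can check the NIS configuration from any host configured as an NIS client. The following is a minimal sketch, assuming the NIS client tools are installed:

    # Display the local NIS domain name.
    domainname
    # Show which NIS server the client is bound to.
    ypwhich
    # List the contents of the passwd map served by the NIS server.
    ypcat passwd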

Data Preparation

Collect Domain Name, Primary Server Address, Standby Server Address 1 (Optional), and Standby Server Address 2 (Optional). For details about how to obtain the data, see Configuring NIS Domain Authentication Parameters.

Configuring NIS Domain Authentication Parameters

If an NIS domain server is deployed on the customer's network, add the storage system to the NIS domain. After the storage system is added to the NIS domain, the NIS domain server can authenticate NFS clients when they attempt to access the storage system's shared resources.

Prerequisites
  • An NIS domain has been set up.
  • Required data has been obtained.
NOTE:
  • The 2000, 5000, and 6000 series storage systems can be connected to an NIS server through management network ports or service network ports (logical ports). If a storage system communicates with an NIS server through a management network port, the management network port of each controller must be connected properly to the NIS server. If a storage system communicates with an NIS server through a service network port, the service network port of each controller under each vStore must be connected properly to the NIS server. It is recommended that storage systems use service network ports to connect to an NIS server.
  • A storage system can be connected to only one NIS server.
Precautions

To avoid security risks generated during data transmission between clients and an NIS domain server, you are advised to use a highly secure authentication mode, such as LDAP over SSL (LDAPS) or AD domain+Kerberos authentication, or adopt physical isolation or end-to-end encryption.

Procedure
  1. Log in to DeviceManager.
  2. Choose Settings > Storage Settings > File Storage Service > Domain Authentication.
  3. In the NIS Domain Settings area, select Enable.

    NOTE:

    NIS domain authentication does not support the transfer of encrypted data. Therefore, NIS domain authentication may cause security risks.

  4. Configure NIS domain authentication parameters.

    Table 3-13 describes the related parameters.

    Table 3-13 Parameters of an NIS domain

    Parameter

    Description

    Value

    Domain Name

    NIS domain name.

    [Rule]

    The domain name:

    • Must contain 1 to 63 characters including letters, digits, underscores (_), and hyphens (-).
    • Cannot start or end with a hyphen (-) or an underscore (_).
    • Can contain multiple levels of domain names separated by periods (.). Periods (.) cannot be at the beginning or end. A domain name at each level can contain a maximum of 63 characters.

    [Example]

    test.com

    Primary Server Address

    IP address or domain name of the primary NIS domain server.

    NOTE:
    • Ensure that the IP address or domain name is reachable. Otherwise, user authentication commands and network commands will time out.
    • Click Test to check whether the entered IP address or domain name is reachable.

    [Example]

    192.168.0.100

    www.test.com

    Standby Server Address 1

    IP address or domain name of standby NIS server 1.

    NOTE:
    • Ensure that the IP address or domain name is reachable. Otherwise, user authentication commands and network commands will time out.
    • Click Test to check whether the entered IP address or domain name is reachable.

    [Example]

    192.168.0.101

    www.test.com

    Standby Server Address 2

    IP address or domain name of standby NIS server 2.

    NOTE:
    • Ensure that the IP address or domain name is reachable. Otherwise, user authentication commands and network commands will time out.
    • Click Test to check whether the entered IP address or domain name is reachable.

    [Example]

    192.168.0.102

    www.test.com

    NOTE:

    Contact the domain server administrator to obtain NIS domain environment information.

  5. Click Save.

    NOTE:

    You can click Restore to Initial to initialize the NIS domain authentication settings.

Follow-up Procedure
  • If a storage system is added to both an NIS domain and an LDAP domain, and an NFS share is added to the network groups of the two domains, mounting the NFS share on clients in the LDAP domain may time out when the NIS domain fails.
  • If an NIS domain fails after a storage system is added to the NIS domain and a client in a non-NIS domain fails to mount an NFS share using NFSv4.0 or NFSv4.1 for the first time, mount the NFS share again on the client.

(Optional) Configuring the NFSv4 Service for a Non-Domain Environment

This section describes how to configure the NFSv4 service for a non-domain environment.

Background

According to the NFSv4 standard protocol, the NFSv4 service can be used only in a domain environment to ensure proper running. To use the NFSv4 service in a non-domain environment, configure the user name@domain name mapping mechanism used by the NFSv4 service on your client. Then, the NFSv4 service will use UIDs and GIDs to transfer owner and group information about files during service transactions between your storage system and client.

Risks
  • In scenarios where the NFSv4 service is used in a non-domain environment, the user authentication method of the NFSv4 service is the same as that of the NFSv3 service. The method cannot meet the theoretical security requirements of the NFSv4 standard protocol.
  • The users mapped on each client depend on the client's user and user group configuration files. These files must be maintained independently on each client to ensure proper mapping.
  • UIDs and GIDs must be used when ACLs are configured for non-root users and non-root user groups. Otherwise, the configuration will fail.
  • The NFSv4 service is not recommended in a non-domain environment.
Configuration on Clients
  1. Run the echo 1 > /sys/module/nfs/parameters/nfs4_disable_idmapping command.
  2. Run the cat /sys/module/nfs/parameters/nfs4_disable_idmapping command.

    If Y is displayed in the command output, the NFSv4 service is successfully configured.

    If you have used the NFSv4 service to mount NFS shares before configuring the NFSv4 service for a non-domain environment, mount the NFS shares again after configuring the NFSv4 service.
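
    The echo command above takes effect only until the next reboot. As one way to make the setting persistent on many Linux distributions (an assumption; verify the mechanism and file path on your distribution), you can set the module option at load time:

    # Persist the NFSv4 idmapping setting across reboots (assumed file path).
    echo "options nfs nfs4_disable_idmapping=1" > /etc/modprobe.d/nfs.conf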

Creating an NFS Share

This section describes how to create an NFS share. After an NFS share is created, the shared file system is accessible to clients running operating systems such as SUSE, Red Hat, HP-UX, Sun Solaris, IBM AIX, and Mac OS.

Prerequisites
  • Related data has been obtained.
  • Logical ports have been created.
  • The NFS service has been enabled.
Procedure
  1. Log in to DeviceManager.
  2. Choose Provisioning > Share > NFS (Linux/UNIX/MAC).
  3. Click Create.

    The Create NFS Share dialog box is displayed.

  4. Set the NFS share parameters.

    Table 3-14 describes the related parameters.

    NOTE:

    GUIs may vary with product versions and models. The actual GUIs prevail.

    Table 3-14 Parameters for creating an NFS share

    Parameter

    Description

    Value

    File System

    File system for which you want to create an NFS share.

    NOTE:

    When global root directory / is selected for File System, you can create an NFS GNS share.

    • You must add an independent share for each file system. After the share is added, a file system will not be displayed if a host is authorized to access only / but not that file system.
    • The GNS root directory / is read-only. You cannot create, modify, or delete directories or files under /, or modify the directory attributes of /. Once the directory of a file system is entered, permissions change to the share permissions of that file system.
    • If no GNS is created, root directory / cannot be mounted using NFSv3. When root directory / is mounted using NFSv4, only shared file systems can be viewed.

    [Example]

    FileSystem001

    Directory

    Directory or subdirectory under the file system root directory.

    [Example]

    Share01

    Share Path

    The share path of a file system consists of File System and Directory.

    NOTE:

    The default share path is / when you create a GNS.

    [Example]

    /Filesystem001/Share01

    Share Name

    Name used by a user for accessing the shared resources.

    NOTE:

    Share Name cannot be set when creating a GNS.

    [Value range]

    The share name:

    • Contains only letters, digits, spaces, and special characters including !"#$%&'()*+-.,:;<=>?@[\]^`{_|}~. On the CLI, some characters need to be entered as escape characters. For example, \| indicates |, || indicates \, \q indicates ?, and \s indicates spaces.
    • Must start with a slash (/).
    • Contains 1 to 255 characters without a slash (/).

    Description

    Description of the created NFS share.

    [Value range]

    The description can be left blank or contain up to 255 characters.

    [Example]

    Share for user 1.

    Character Encoding

    Character encoding used for communication between clients and the storage system. The encoding configured for the NFS share should be the same as that used by the client. The encoding applies to the names and metadata of shared files but does not change the encoding of file data. Encodings include:

    • UTF-8

      International code set

    • EUC-JP

      euc-j*[ ja ] code set

    • JIS

      JIS code set

    • S-JIS

      cp932*[ ja_jp.932 ] code set

    • ZH

      Simplified Chinese code set, in compliance with GB2312

    • GBK

      Simplified Chinese code set, in compliance with GB2312

    • EUC-TW

      Traditional Chinese code set, in compliance with CNS11643

    • BIG5

      cp950 traditional Chinese code set

    • DE

      German character set, in compliance with ISO8859-1

    • PT

      Portuguese character set, in compliance with ISO8859-1

    • ES

      Spanish character set, in compliance with ISO8859-1

    • FR

      French character set, in compliance with ISO8859-1

    • IT

      Italian character set, in compliance with ISO8859-1

    • KO

      cp949 Korean code set

    NOTE:
    • The storage system automatically lists codes supported by the file system.
    • One way of querying character encoding on clients (for example, in Linux) is to run the locale command.
    • Character Encoding cannot be set when you create a GNS.

    [Default value]

    UTF-8

    Audit Log

    After this function is enabled, the system records audit logs of the shared directory. The audit log items include Open, Create, Read, Write, Close, Delete, Rename, Obtain properties, Set properties, Obtain security properties, and Set security properties. By default, the system records Create, Write, Delete, and Rename operations on the shared directory.

    NOTE:

    This function is not supported when you create a GNS.

    [Default value]

    Disabled

  5. Click Next.

    The Set Permissions page is displayed.

  6. Optional: Set permissions for the NFS share.

    1. In Client List, select the client for which you want to set NFS share permissions.

      Click Add to create a client if the client list is empty. For details, refer to Adding an NFS Share Client.

      NOTE:

      Permission information cannot be set for a GNS share.

    2. Click Next.

  7. Confirm that you want to create the NFS share.

    1. Confirm your settings of the NFS share to be created, and click Finish.

      The Execution Result dialog box is displayed indicating that the operation succeeded.

    2. Click Close.

Adding an NFS Client

An NFS client enables client users to access shared file systems over a network.

Prerequisites
  • Related data has been obtained.
  • A host name must be available on the DNS if you need to add a Host client.
  • A network group name must be available on the LDAP or NIS server if you need to add a Network Group client.
Context

When the global root directory / is selected for File System, you cannot add an NFS client.

Procedure
  1. Log in to DeviceManager.
  2. Choose Provisioning > Share > NFS (Linux/UNIX/MAC).
  3. Select the NFS share for which you want to add a client.
  4. In the Client Information area, click Add.

    The Add Client dialog box is displayed.

    NOTE:

    GUIs may vary with product versions and models. The actual GUIs prevail.

  5. Configure the client properties.

    Table 3-15 describes the related parameters.

    Table 3-15 NFS client parameters

    Parameter

    Description

    Value

    Type

    Type of the NFS client to be added. Types include:

    • Host

      Applicable to clients in a non-domain environment.

    • Network group

      Applicable to clients in an LDAP or NIS domain.

    NOTE:

    When a client is included in multiple share permissions, the priority of share authentication in descending order is as follows: host name > IP address > IP network segment > wildcard > network group > * (anonymous).

    [Default value]

    Host

    Name or IP Address

    Name or service IP address of the NFS client.

    NOTE:

    This parameter is available only when Type is Host.

    [Value range]

    You can enter multiple names or IP addresses of clients, separated by semicolons, spaces, or carriage returns. This parameter can contain a maximum of 25,600 characters.

    The name:

    • Contains 1 to 255 characters, including letters, digits, hyphens (-), periods (.), and underscores (_).
    • Can begin only with a digit or letter and cannot end with a hyphen (-) or an underscore (_).
    • Cannot contain consecutive periods (.), consist of all digits, or contain a period adjacent to a hyphen or an underscore, for example, "_.", "._", ".-", or "-.".

    For IP addresses:

    • You can enter a single IP address or an IP address segment, or use the asterisk (*) to represent the IP addresses of all clients.
    • You can enter IPv4 addresses, IPv6 addresses, or both.
    • The mask length of IPv4 address segments ranges from 1 to 32. The prefix length of IPv6 address segments ranges from 1 to 128.

    [Example]

    192.168.0.10

    192.168.0.10;192.168.1.0/24

    Network Group Name

    Network group name in the LDAP or NIS domain.

    NOTE:

    This parameter is available only when Type is Network group.

    [Value range]

    The name:

    • Contains 1 to 254 characters.
    • Contains only letters, digits, underscores (_), periods (.), and hyphens (-).

    [Example]

    a123456

    Permission

    Permission for a client to access the NFS share. Permissions include:

    • Read-only

      The client can only read files in the NFS share.

    • Read-write

      The client can perform any operations on files in the NFS share.

    [Default value]

    Read-only

    Write Mode

    Write mode of the NFS client. Modes include:

    • Synchronous

      Data written to the NFS share is written into disks immediately.

    • Asynchronous

      Data written to the NFS share is written into the cache first and then into disks.

    NOTE:

    If a client mounts an NFS share in asynchronous mode, data may be lost if the client and the storage system fail at the same time.

    [Default value]

    Synchronous

    Permission Constraint

    Determines whether to retain the user ID (UID) and group ID (GID) of users accessing the shared directory.

    • all_squash

      The UIDs and GIDs of all users accessing the shared directory are mapped to user nobody. This option is applicable to public directories.

    • no_all_squash

      The UIDs and GIDs of users accessing the shared directory are retained.

    [Default value]

    no_all_squash

    Root Permission Constraint

    Controls the root permission of a client.

    • root_squash

      A client cannot access the storage system as user root. If a client attempts to access the storage system as user root, user root is mapped to user nobody.

    • no_root_squash

      A client can access the storage system as user root. User root can fully manage and access the shared directory.

    NOTE:

    If you want to create VMs in an NFS share, Root Permission Constraint must be set to no_root_squash. Otherwise, VMs may not run properly.

    [Default value]

    root_squash

    Source Port Verification

    Determines whether to enable source port verification.

    • secure

      Clients can use only ports 1 to 1023 to access NFS shares.

    • insecure

      Clients can use any port to access NFS shares.

    [Default value]

    insecure

    Anonymous User ID

    Specifies the UID and GID to which a user accessing the shared directory is mapped when the user is squashed to an anonymous user.

    NOTE:

    This parameter applies to V300R006C30 and later versions.

    [Value range]

    0 to 4294967294

    [Default value]

    65534

  6. Click OK.

    The Execution Result dialog box is displayed, indicating that the operation succeeded.

  7. Click Close.
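
After the client has been added and the share has been mounted (see Accessing an NFS Share), you can check the effect of Root Permission Constraint from a Linux client. The following sketch assumes the share is mounted at /mnt with the default root_squash, the default anonymous user ID 65534, and a shared directory that is writable by the anonymous user; the file name is illustrative.

#touch /mnt/testfile
#ls -ln /mnt/testfile
-rw-r--r-- 1 65534 65534 0 Jan  9 20:08 /mnt/testfile

With root_squash, a file created by user root is owned by the anonymous UID and GID (65534 by default). With no_root_squash, the file would be owned by UID 0.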

Accessing an NFS Share

This section describes how to use a client to access an NFS share. A client accesses an NFS share in an LDAP/NIS domain or a non-domain environment in the same way.

Accessing an NFS Share with a SUSE, Red Hat, or Ubuntu Client
  1. Log in to the client as user root.
  2. Run the showmount -e ipaddress command to view available NFS shares in the storage system.

    ipaddress represents the logical IP address of the storage system. 192.168.50.16 is used as an example.

    #showmount -e 192.168.50.16 
    Export list for 192.168.50.16 
    /nfstest * 
    #     
    NOTE:
    • /nfstest in the output is the Share Name of the NFS share created in the storage system. If a GNS is created, / will be displayed.
    • If SmartMulti-Tenant is configured for a storage system and service IP addresses are IPv6 addresses, you must log in to DeviceManager to query NFS shares in the storage system instead of running the showmount -e ipaddress command.
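
    If the showmount command fails or times out, you can check whether the RPC services of the storage system are reachable from the client. This is a generic troubleshooting sketch; the output below is illustrative, and port numbers other than 111 and 2049 vary.

    #rpcinfo -p 192.168.50.16
       program vers proto   port  service
        100000    4   tcp    111  portmapper
        100005    3   tcp   4046  mountd
        100003    3   tcp   2049  nfs

    The output should list at least the portmapper, mountd, and nfs services.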

  3. Run the mount -t nfs -o vers=n,proto=m,rsize=o,wsize=p,hard,intr,timeo=q ipaddress:sharename /mnt command to mount an NFS share to the client.

    Table 3-16 describes the related parameters.

    sharename represents the Share Name of the NFS share created in the storage system.

    #mount -t nfs -o vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,timeo=600 192.168.50.16:/nfstest /mnt
    NOTE:
    • To mount a GNS, run the following command:
      #mount -t nfs -o vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,timeo=600 192.168.50.16:/ /mnt
    • If the client uses NFSv4.1 to mount an NFS share, you are advised to specify the minorversion parameter. For a SUSE client, run the following command (commands for other operating systems are similar):
      mount -t nfs -o vers=4,minorversion=1 192.168.50.16:/nfsshare /mnt/localdir
    Table 3-16 Parameters for mounting an NFS share to a SUSE, Red Hat, or Ubuntu client

    Parameter

    Description

    Example

    o

    Option for mounting an NFS share, including ro and rw.

    • ro: mounts a share that is read-only.
    • rw: mounts a share that can be read and written.

    The default value is rw.

    vers

    NFS version. The value can be 3 or 4.

    In an environment that requires high reliability, you are advised to use NFSv3.

    proto

    Transfer protocol. The value can be tcp or udp.

    tcp

    rsize

    Number of bytes for reading files from an NFS server.

    16384 is recommended for Red Hat 7. 1048576 is recommended for other SUSE and Red Hat versions.

    wsize

    Number of bytes for writing files to an NFS server.

    1048576 is recommended.

    timeo

    Retransmission interval upon timeout. The unit is one tenth of a second.

    600 is recommended.

  4. Run the mount command to verify that the NFS share has been mounted to the local computer.

    #mount 
    192.168.50.16:/nfstest on /mnt type nfs (rw,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,timeo=600,addr=192.168.50.16)     
    NOTE:

    If a GNS is mounted, the following information is displayed:

    #mount
    192.168.50.16:/ on /mnt type nfs (rw,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,timeo=600,addr=192.168.50.16)

    When the preceding information is displayed, the NFS share has been successfully mounted to the local computer.
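
The mount command above does not persist across client restarts. To mount the NFS share automatically at boot, you can add an entry to /etc/fstab on the client. The following line is a sketch that reuses the example IP address, share name, mount point, and options; adjust them to your environment.

192.168.50.16:/nfstest /mnt nfs vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,timeo=600 0 0

After editing /etc/fstab, you can run mount -a to check the entry without restarting.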

Accessing an NFS Share with a Debian Client
  1. Log in to the client as user root.
  2. On the client, run the apt-get install nfs-common command to install the nfs-common software package.
  3. Run the showmount -e ipaddress command to view available NFS shares in the storage system.

    ipaddress represents the logical IP address of the storage system. 192.168.50.16 is used as an example.

    #showmount -e 192.168.50.16 
    Export list for 192.168.50.16 
    /nfstest * 
    #     
    NOTE:
    • /nfstest in the output is the Share Name of the NFS share created in the storage system. If a GNS is created, / is displayed.
    • If SmartMulti-Tenant is configured for a storage system and service IP addresses are IPv6 addresses, you must log in to DeviceManager to query NFS shares in the storage system instead of running the showmount -e ipaddress command.

  4. Run the mkdir /mnt/share command to create a directory on the client to mount an NFS share.

    The following uses the /mnt/share directory as an example.

  5. Run the mount ipaddress:sharename /mnt/share command to mount an NFS share.

    sharename represents the Share Name of the NFS share created in the storage system.

    mount 192.168.50.16:/nfstest /mnt/share
    NOTE:

    To mount a GNS, run the following command:

    mount 192.168.50.16:/ /mnt/share

  6. Run the df -hT command to verify that the NFS share has been successfully mounted to the local computer.
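
    The output should contain a line of type nfs for the mounted share, similar to the following illustration (sizes depend on the file system):

    #df -hT
    Filesystem             Type  Size  Used Avail Use% Mounted on
    192.168.50.16:/nfstest nfs   1.0T     0  1.0T   0% /mnt/share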
Accessing an NFS Share with an HP-UX or SUN Solaris Client
  1. Log in to the client as user root.
  2. Run the showmount -e ipaddress command to view available NFS shares in the storage system.

    ipaddress represents the logical IP address of the storage system. 192.168.50.16 is used as an example.

    #showmount -e 192.168.50.16 
    Export list for 192.168.50.16 
    /nfstest * 
    #     
    NOTE:
    • /nfstest in the output is the Share Name of the NFS share created in the storage system. If a GNS is created, / is displayed.
    • If SmartMulti-Tenant is configured for a storage system and service IP addresses are IPv6 addresses, you must log in to DeviceManager to query NFS shares in the storage system instead of running the showmount -e ipaddress command.

  3. Run the mount [-F nfs|-f nfs] -o vers=n,proto=m ipaddress:sharename /mnt command to mount an NFS share.

    Table 3-17 describes the related parameters.

    sharename represents the Share Name of the NFS share created in the storage system.

    #mount -f nfs -o vers=3,proto=tcp 192.168.50.16:/nfstest /mnt
    NOTE:

    To mount a GNS, run the following command:

    #mount -f nfs -o vers=3,proto=tcp 192.168.50.16:/ /mnt
    Table 3-17 Parameters for mounting an NFS share to an HP-UX or a SUN Solaris client

    Parameter

    Description

    Example

    -F nfs or -f nfs

    Optional.

    Use -F nfs on an HP-UX client and -f nfs on a Solaris client.

    vers

    NFS version. The value can be 3 or 4.

    In an environment that requires high reliability, you are advised to use NFSv3.

    proto

    Transfer protocol. The value can be tcp or udp.

    tcp

  4. Run the mount command to verify that the NFS share has been mounted to the local computer.

    #mount 
    192.168.50.16:/nfstest on /mnt type nfs (rw,vers=3,proto=tcp,addr=192.168.50.16)     
    NOTE:

    If a GNS is mounted, the following information is displayed:

    #mount
    192.168.50.16:/ on /mnt type nfs (rw,vers=3,proto=tcp,addr=192.168.50.16)

    When the preceding information is displayed, the NFS share has been successfully mounted to the local computer.
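
To make the mount on a Solaris client persistent across restarts, you can typically add an entry to /etc/vfstab. The following line is a sketch that reuses the example values above; verify the field order against your operating system documentation (on HP-UX, the equivalent file is /etc/fstab).

192.168.50.16:/nfstest - /mnt nfs - yes vers=3,proto=tcp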

Accessing an NFS Share with an IBM AIX Client
  1. Log in to the client as user root.
  2. Run showmount -e ipaddress to view available NFS shares in the storage system.

    ipaddress represents the logical IP address of the storage system. 192.168.50.16 is used as an example.

    #showmount -e 192.168.50.16 
    Export list for 192.168.50.16 
    /nfstest * 
    #     
    NOTE:
    • /nfstest in the output is the Share Name of the NFS share created in the storage system. If a GNS is created, / is displayed.
    • If SmartMulti-Tenant is configured for a storage system and service IP addresses are IPv6 addresses, you must log in to DeviceManager to query NFS shares in the storage system instead of running the showmount -e ipaddress command.

  3. Run the mount ipaddress:sharename /mnt command to mount an NFS share.

    sharename represents the Share Name of the NFS share created in the storage system.

    #mount 192.168.50.16:/nfstest /mnt 
    mount: 1831-008 giving up on: 
    192.168.50.16:/nfstest  
    Vmount: Operation not permitted. 
    #     
    NOTE:

    To mount a GNS, run the following command:

    #mount 192.168.50.16:/ /mnt
    mount: 1831-008 giving up on:
    192.168.50.16:/
    Vmount: Operation not permitted.
    #
    NOTE:

    If the default NFS port on an AIX client differs from that on the storage system, the preceding command fails and a message is displayed indicating that the operation is not permitted, as shown above. In this case, run the following command to solve the problem.

    #nfso -o nfs_use_reserved_ports=1
    Setting nfs_use_reserved_ports to 1
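
    The preceding nfso command takes effect immediately but may not persist across restarts. On AIX, the -p flag is typically used to apply the change both immediately and at the next boot; verify this behavior against your AIX version.

    #nfso -p -o nfs_use_reserved_ports=1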

  4. Run the mount command to verify that the NFS share has been mounted to the local computer.

    #mount 
    192.168.50.16:/nfstest on /mnt type nfs (rw,addr=192.168.50.16)     
    NOTE:

    If a GNS is mounted, the following information is displayed:

    #mount
    192.168.50.16:/ on /mnt type nfs (rw,addr=192.168.50.16)

    When the preceding information is displayed, the NFS share has been successfully mounted to the local computer.

Accessing an NFS Share with a Mac OS Client
  1. Run the showmount -e ipaddress command to view available NFS shares in the storage system.

    ipaddress represents the logical IP address of the storage system. 192.168.50.16 is used as an example.

    Volumes root# showmount -e 192.168.50.16 
    /nfstest *     
    NOTE:
    • /nfstest in the output is the Share Name of the NFS share created in the storage system. If a GNS is created, / is displayed.
    • If SmartMulti-Tenant is configured for a storage system and service IP addresses are IPv6 addresses, you must log in to DeviceManager to query NFS shares in the storage system instead of running the showmount -e ipaddress command.

  2. Run the sudo /sbin/mount_nfs -P ipaddress:sharename /Volumes/mount1 command to mount an NFS share.

    sharename represents the Share Name of the NFS share created in the storage system.

    Volumes root# sudo /sbin/mount_nfs -P 192.168.50.16:/nfstest /Volumes/mount1
    NOTE:

    To mount a GNS, run the following command:

    Volumes root# sudo /sbin/mount_nfs -P 192.168.50.16:/ /Volumes/mount1

  3. Run the mount command to verify that the NFS share has been mounted to the local computer.

    Volumes root# mount 
    /dev/disk0s2 on / (hfs, local, journaled) 
    devfs on /dev (devfs, local) 
    fdesc on /dev (fdesc, union) 
    map -hosts on /net (autofs, automounted) 
    map auto_home on /home (autofs, automounted) 
    192.168.50.16:/nfstest on /Volumes/mount1 (nfs)     
    NOTE:

    If a GNS is mounted, the following information is displayed:

    Volumes root# mount
    /dev/disk0s2 on / (hfs, local, journaled)
    devfs on /dev (devfs, local)
    fdesc on /dev (fdesc, union)
    map -hosts on /net (autofs, automounted)
    map auto_home on /home (autofs, automounted)
    192.168.50.16:/ on /Volumes/mount1 (nfs)

    When the preceding information is displayed, the NFS share has been successfully mounted to the local computer.
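
On newer Mac OS versions, the -P flag of mount_nfs is deprecated in favor of the resvport mount option, which also forces the client to use a reserved source port (ports 1 to 1023, as required when Source Port Verification is set to secure). The following sketch is an assumed equivalent of the command above; verify it against your Mac OS version.

Volumes root# sudo mount -t nfs -o resvport 192.168.50.16:/nfstest /Volumes/mount1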

Accessing an NFS Share with a VMware Client
NOTE:

If you want to create VMs on an NFS share, Root Permission Constraint of the NFS share must be set to no_root_squash.

  1. Log in to VMware vSphere Client.
  2. Select the desired host from the left navigation tree.
  3. Choose Configuration > Storage > Add Storage.

    The Add Storage wizard is displayed.

    NOTE:

    GUIs may vary with versions. The actual GUIs prevail.

  4. In Select Storage Type, select Network File System and click Next.

    The Locate Network File System page is displayed.

  5. Set the related parameters.

    Table 3-18 describes related parameters.
    Table 3-18 Parameters for adding an NFS share in VMware

    Parameter

    Description

    Value

    Server

    Logical IP address of the storage system.

    [Example]

    192.168.50.16

    Folder

    Share Name of the NFS share created in the storage system.

    [Example]

    /nfstest

    NOTE:

    Because the GNS root directory / is read-only, GNS shares cannot be used as VMware datastores.

    Datastore Name

    Name of the NFS share in VMware.

    [Example]

    data

  6. Click Next.
  7. Confirm the information and click Finish.
  8. On the Configuration tab page, view the newly added NFS share.
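
If you prefer the ESXi command line to vSphere Client, recent ESXi versions provide an equivalent esxcli command. The following sketch reuses the example values in Table 3-18; verify the syntax against your ESXi version.

#esxcli storage nfs add -H 192.168.50.16 -s /nfstest -v data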
Follow-up Procedure

If you modify NFS user information, new user authentication information takes effect after 30 minutes.

NFS Share Configuration Example

This section uses an example to explain how to configure an NFS share.

Scenario

A research institute has an enterprise office system (physical servers) and a database system (VM). Dedicated storage space must be allocated to the different service systems. The following describes the customer's existing environment and requirements.

Network Diagram

Figure 3-4 shows the customer's network diagram.

Figure 3-4 Customer's network diagram

The customer's live network can be summarized as follows:

  • The enterprise office system runs Linux and Mac OS, and is connected to an LDAP domain server.
  • Linux hosts belong to network group ldapgrouplinux. Mac OS hosts belong to network group ldapgroupmac.
  • The enterprise office system, LDAP domain server, VMware server, and storage system reside on the same LAN.
Customer Requirements

The research institute wants to purchase a storage system for the enterprise office system and VM system. The storage space must be allocated as follows:

  • Each of the network groups for Mac OS hosts, Linux hosts, and VMs requires 1 TB dedicated readable and writable storage space.
  • The Mac OS network group and Linux network group can only access their own storage space.
Requirement Analysis

This section provides an analysis of the customer's requirements and a solution.

Customer requirement analysis:

  • Clients use Linux and Mac OS operating systems, so the storage system can employ NFS shares to provide storage space for the two systems respectively.
  • The storage system supports NFS share management in a non-domain and an LDAP domain environment.

Solution:

  • Use an OceanStor storage system.
  • Configure each service system as shown in Table 3-19.
Table 3-19 Basic information of service systems

Service System

Share Path

Shared Space

Network Group

IP Address

Linux hosts in the office system

/FileSystem0000

1 TB

ldapgrouplinux

N/A

Mac OS hosts in the office system

/FileSystem0001

1 TB

ldapgroupmac

N/A

VM

/FileSystem0002

1 TB

N/A

192.168.50.20

LDAP server

N/A

N/A

N/A

192.168.50.21

Configuration Process

Figure 3-5 shows the configuration process, helping you understand the subsequent configuration.

Figure 3-5 Configuration process
NOTE:

This configuration process is only applicable to this configuration example. For the complete configuration process of NFS shares, see Configuration Process.

Configuration Procedure

This section describes how to configure NFS shares on DeviceManager.

Prerequisites

The storage system and application servers can properly communicate with each other.

Procedure
  1. Add the storage system to an LDAP domain.

    1. Log in to DeviceManager and choose Settings > Storage Settings > File Storage Service > Domain Authentication.
    2. Set the LDAP domain parameters in the LDAP Domain Settings area.

  2. Create file systems.

    1. On DeviceManager, choose Provisioning > File System.

      The File System page is displayed.

    2. Click Create.

      The Create File System dialog box is displayed.

    3. In the Create File System dialog box, configure parameters as planned.
      Table 3-20 describes related parameters.
      Table 3-20 Parameters for creating file systems

      Parameter

      Planned Value

      Name

      FileSystem

      Capacity

      1 TB

      File System Block Size

      8 KB

      Quantity

      3

      Owning Storage Pool

      StoragePool000

      NOTE:
      • When multiple file systems are created, the storage system automatically adds a number to each file system name for distinction. In this example, the created file systems are named FileSystem0000, FileSystem0001, and FileSystem0002.
      • In this example, a VM and a database are used. Therefore, the file system block size can be set to 8 KB.
    4. Click OK.

  3. Create an NFS share and set the access permission.

    The following uses sharing FileSystem0000 with Linux hosts as an example.
    1. On DeviceManager, choose Provisioning > Share > NFS (Linux/UNIX/MAC), and click Create.

      The Create NFS Share Wizard: Step 1 of 4 page is displayed.

    2. In File System, select FileSystem0000.
    3. Click Next.

      The Create NFS Share Wizard: Step 2 of 4 page is displayed.

    4. Click Add.

      The Add Client dialog box is displayed.

    5. Set Type to Network Group and enter ldapgrouplinux in Network Group Name. Select Read-write from Permission.
    6. Click OK.

      The Create NFS Share Wizard: Step 2 of 4 page is displayed.

    7. Click Next.

      The Create NFS Share Wizard: Step 3 of 4 page is displayed.

    8. Click Finish.

      The Create NFS Share Wizard: Step 4 of 4 page is displayed.

    9. Click Close.

  4. Repeat 3 to share FileSystem0001 and FileSystem0002 respectively to Mac OS and VMware hosts.

    NOTE:

    To share FileSystem0001 to Mac OS hosts, enter ldapgroupmac in Network Group Name in 3.e.

    To share FileSystem0002 to the VMware ESX host, enter 192.168.50.20 in Name or IP Address in 3.e.

Accessing Shared Space

This section describes how to access shared space. After configuring an NFS share, you must mount it to a client.

Procedure
  1. Mount an NFS share to an LDAP client that belongs to network group ldapgrouplinux in the LDAP domain.

    1. Run the mount -t nfs -o vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,timeo=600 192.168.50.16:/FileSystem0000 /mnt command on the Linux client to mount the NFS share.
      linux-client:~ #mount -t nfs -o vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,timeo=600 192.168.50.16:/FileSystem0000 /mnt     

      linux-client indicates the name of the Linux client. timeo indicates the timeout time for retransmission (unit: 1/10 seconds, recommended value: 600). 192.168.50.16 indicates the IP address of the logical port on the storage system. /FileSystem0000 indicates the name of the NFS share to be mounted. /mnt indicates the mount point.

    2. Run the mount command to view the mounted share.
      linux-client:~ # mount 
      192.168.50.16:/FileSystem0000 on /mnt type nfs (ro,addr=192.168.50.16)

      The command output indicates that the NFS share has been successfully mounted to the Linux client that belongs to network group ldapgrouplinux.

  2. Mount an NFS share to an LDAP client that belongs to network group ldapgroupmac in the LDAP domain.

    1. Run the sudo /sbin/mount_nfs -P 192.168.50.16:/FileSystem0001 /volumes/mnt command on the Mac OS client to mount the NFS share.
      Volumes root# sudo /sbin/mount_nfs -P 192.168.50.16:/FileSystem0001 /volumes/mnt     

      Volumes root indicates the name of the Mac OS client that belongs to network group ldapgroupmac. 192.168.50.16 indicates the IP address of the logical port on the storage system. /FileSystem0001 indicates the NFS shared file system to be mounted. /volumes/mnt indicates the mount point.

    2. Run the mount command to view the mounted share.
      Volumes root# mount 
      /dev/disk0s2 on / (hfs, local, journaled)  
      devfs on /dev (devfs, local)  
      fdesc on /dev (fdesc, union)  
      map -hosts on /net (autofs, automounted)  
      map auto_home on /home (autofs, automounted)  
      192.168.50.16:/FileSystem0001 on /Volumes/mnt (nfs)      

      The command output indicates that the NFS share has been successfully mounted to the Mac OS client that belongs to network group ldapgroupmac.

  3. Mount an NFS share in VMware.

    1. Log in to VMware vSphere Client.
    2. Select the desired host from the left navigation tree.
    3. Choose Configuration > Storage > Add Storage.

      The Add Storage wizard is displayed.

    4. In Select Storage Type, select Network File System. Then, click Next.

      The Locate Network File System page is displayed.

    5. Set parameters.
      Table 3-21 describes related parameters.
      Table 3-21 Parameters for adding an NFS share in VMware

      Parameter

      Value

      Server

      192.168.50.16

      Folder

      /FileSystem0002

      Datastore Name

      data

    6. Click Next.
    7. Confirm the information and click Finish.
    8. On the Configuration tab page, view the newly added NFS share.

NFS GNS Share Configuration Example

This section uses an example to explain how to configure an NFS GNS share.

Scenario

Administrators of an enterprise need to manage all NFS shared file systems in a centralized manner. A GNS share must be configured for the administrators.

Network Diagram

Figure 3-6 shows the customer's network diagram.

Figure 3-6 Customer's network diagram

The customer's live network can be summarized as follows:

  • Both enterprise office system and administrator system run Linux. They are both connected to an LDAP domain server.
  • Linux hosts in the enterprise office system belong to three network groups in the LDAP domain, ldapgroup1, ldapgroup2, and ldapgroup3. The administrator host belongs to network group ldapmgr in the LDAP domain.
  • The enterprise office system, administrator's host, LDAP domain server, and storage system reside on the same LAN.
Customer Requirements

The storage system purchased by the enterprise is used for the office system. The storage space must be allocated as follows:

  • In the enterprise office system, there are three network groups (Linux network groups). Each network group needs 1 TB dedicated readable and writable storage space.
  • The administrators need to manage all storage space.
Requirement Analysis

This section provides an analysis of the customer's requirements and a solution.

Customer requirement analysis:

  • All clients run the Linux operating system, so the storage system can employ NFS shares to provide storage space for clients.
  • The administrator host runs a Linux operating system, so an NFS GNS share can be used to enable the administrator to manage all NFS shared file systems directly.
  • The storage system supports NFS share management in a non-domain and an LDAP domain environment.

Solution:

  • Use an OceanStor storage system.
  • Configure each service system as shown in Table 3-22.
Table 3-22 Basic information of service systems

Service System

Share Path

Shared Space

Network Group

IP Address

Linux hosts in the office system

/FileSystem0000

1 TB

ldapgroup1

N/A

Linux hosts in the office system

/FileSystem0001

1 TB

ldapgroup2

N/A

Linux hosts in the office system

/FileSystem0002

1 TB

ldapgroup3

N/A

Administrator's Linux host

All shared file systems

N/A

ldapmgr

N/A

LDAP server

N/A

N/A

N/A

192.168.50.21

Configuration Process

Figure 3-7 shows the configuration process, helping you understand the subsequent configuration.

Figure 3-7 Configuration process
NOTE:

This configuration process is only applicable to this configuration example. For the complete configuration process of NFS shares, see Configuration Process.

Configuration Procedure

This section describes how to configure NFS GNS shares on DeviceManager.

Prerequisites

The storage system and application servers can properly communicate with each other.

Procedure
  1. Add the storage system to an LDAP domain.

    1. Log in to DeviceManager and choose Settings > Storage Settings > File Storage Service > Domain Authentication.
    2. Set the LDAP domain parameters in the LDAP Domain Settings area.

  2. Create file systems.

    1. On DeviceManager, choose Provisioning > File System.

      The File System page is displayed.

    2. Click Create.

      The Create File System dialog box is displayed.

    3. In the Create File System dialog box, configure parameters as planned.
      Table 3-23 describes the related parameters.
      Table 3-23 Parameters for creating file systems

      Parameter

      Planned Value

      Name

      FileSystem

      Capacity

      1 TB

      File System Block Size

      32 KB

      Quantity

      3

      Owning Storage Pool

      StoragePool000

      NOTE:
      • When multiple file systems are created, the storage system automatically adds a number to each file system name for distinction. In this example, the created file systems are named FileSystem0000, FileSystem0001, and FileSystem0002.
      • Assume that the size of most files in the file system is between 100 KB and 1 MB. The file system block size can be set to 32 KB.
    4. Click OK.

  3. Create an NFS share and set the access permission.

    The following uses sharing FileSystem0000 with hosts in network group ldapgroup1 as an example.
    1. On DeviceManager, choose Provisioning > Share > NFS (Linux/UNIX/MAC), and click Create.

      The Create NFS Share Wizard: Step 1 of 4 page is displayed.

    2. In File System, select file system FileSystem0000.
    3. Click Next.

      The Create NFS Share Wizard: Step 2 of 4 page is displayed.

    4. Click Add.

      The Add Client dialog box is displayed.

    5. Set Type to Network Group and enter ldapgroup1 in Network Group Name.
    6. Click OK.

      The Create NFS Share Wizard: Step 2 of 4 page is displayed.

    7. Click Next.

      The Create NFS Share Wizard: Step 3 of 4 page is displayed.

    8. Click Finish.

      The Create NFS Share Wizard: Step 4 of 4 page is displayed.

    9. Click Close.

  4. Repeat 3 to share FileSystem0001 and FileSystem0002 to hosts in network groups ldapgroup2 and ldapgroup3 respectively.

    NOTE:

    To share FileSystem0001 to hosts in network group ldapgroup2, enter ldapgroup2 for Network Group Name in 3.e.

    To share FileSystem0002 to hosts in network group ldapgroup3, enter ldapgroup3 for Network Group Name in 3.e.

  5. Repeat 3 to share FileSystem0000, FileSystem0001, and FileSystem0002 to network group ldapmgr.
  6. Create an NFS GNS share.

    1. On DeviceManager, choose Provisioning > Share > NFS (Linux/UNIX/MAC), and click Create.

      The Create NFS Share Wizard: Step 1 of 4 dialog box is displayed.

    2. In File System, select the root directory /.
    3. Click Next.

      The Create NFS Share Wizard: Step 2 of 4 dialog box is displayed.

    4. Click Next.

      The Create NFS Share Wizard: Step 3 of 4 dialog box is displayed.

    5. Click Finish.

      The Create NFS Share Wizard: Step 4 of 4 dialog box is displayed.

    6. Click Close.

Accessing Shared Space

This section describes how to access shared space. After configuring an NFS share, you must mount it to a client.

Procedure
  1. Mount an NFS share to an LDAP client that belongs to network group ldapgroup1 in the LDAP domain.

    1. Run the mount -t nfs -o vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,timeo=600 192.168.50.16:/FileSystem0000 /mnt command on the Linux client to mount the NFS share.
      linux-client:~ #mount -t nfs -o vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,timeo=600 192.168.50.16:/FileSystem0000 /mnt     

      linux-client indicates the name of the Linux client. timeo indicates the timeout time for retransmission (unit: 1/10 seconds, recommended value: 600). 192.168.50.16 indicates the IP address of the logical port on the storage system. /FileSystem0000 indicates the NFS share name to be mounted. /mnt indicates the mount point.

    2. Run the mount command to view the mounted share.
      linux-client:~ # mount 
      192.168.50.16:/FileSystem0000 on /mnt type nfs (ro,addr=192.168.50.16)

      The command output indicates that the NFS share has been successfully mounted to the Linux client that belongs to network group ldapgroup1.

  2. Repeat 1 to mount the NFS shares to LDAP clients that belong to network groups ldapgroup2 and ldapgroup3 in the LDAP domain.
  3. Mount the NFS GNS share to the administrator's host that belongs to the network group ldapmgr in the LDAP domain.

    1. Run the mount -t nfs -o vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,timeo=600 192.168.50.16:/ /mnt command on the administrator's Linux client.
      linux-mgr:~ # mount -t nfs -o vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,timeo=600 192.168.50.16:/ /mnt     

      linux-mgr indicates the name of the Linux client. timeo indicates the timeout time for retransmission (unit: 1/10 seconds, recommended value: 600). 192.168.50.16 indicates the IP address of the logical port on the storage system. / indicates the NFS GNS share in the storage system. /mnt indicates the mount point.

    2. Run the mount command to check the mounted share.
      linux-mgr:~ # mount 
      192.168.50.16:/ on /mnt type nfs (ro,addr=192.168.50.16)     
      The command output shows that the NFS GNS share has been mounted to the administrator's Linux client. If you access the mounted directory, you can access and view all NFS shared file systems.
      linux-mgr:~ # ll /mnt 
      total 12 
      drwxrwxrwx 3 root root 3 Jan  9 20:08 FileSystem0000 
      drwxrwxrwx 3 root root 3 Jan  9 20:08 FileSystem0001 
      drwxrwxrwx 3 root root 3 Jan  9 20:08 FileSystem0002     
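
      As noted in Configuring an NFS Share, the GNS root directory / is read-only, while each file system directory follows the permission of its own NFS share. For example, an attempt to create a file directly under the GNS root fails (the exact error text varies by operating system and mount options; the file name is illustrative):

      linux-mgr:~ # touch /mnt/newfile
      touch: cannot touch '/mnt/newfile': Permission denied

      Operations inside /mnt/FileSystem0000 and the other file system directories are governed by the permissions of their respective NFS shares.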
