Common Commands for HA System

This section describes the common commands of the HA system and their functions.

Overview of Commands

The Veritas software for the HA system includes VxVM, VVR, VCS, and GCO, and the Sun Cluster software for the HA system includes VxVM and Sun Cluster. In both cases, the common commands are classified into status query commands and maintenance commands.

Generic Naming Rules of Veritas Commands
  • The commands of VxVM usually start with vx.
  • The commands of VVR usually start with vr.
  • The commands of VCS usually start with ha.
Generic Formats of Veritas Commands
  • Format of query commands of VxVM: Command list
  • Format of VCS commands (hares and hagrp): Command -action resource/resource group -sys host name
NOTE:

The actions often include online, offline, and clear.
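
For example, instantiating this generic format with the AppService group and Primary host used in the examples later in this guide (resource name is a placeholder for an actual resource; adjust the names to your environment):

# hagrp -online AppService -sys Primary
# hares -clear resource name -sys Primary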

Query Methods of Veritas Command Help
  • Command -H
  • man Command
Directories for Saving Veritas Commands
  • /opt/VRTS/bin
  • /usr/bin
  • /usr/sbin
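
For example, to look up help for the vxprint command used in the following sections (a brief illustration, assuming the generic help options above apply to the command in question):

# vxprint -H
# man vxprint

If a command is not found in the current PATH, it can usually be run with its full path from one of the directories listed above, for example /opt/VRTS/bin/vxprint.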

Checking the Status of the Volume, RLink and RVG

These commands are used to query the status of the volume, RLink, and RVG during preventive maintenance inspections and fault maintenance.

Syntax
  • View the volume status: vxprint -v
  • View the RVG status:
    • vxprint -V
    • vxprint -l datarvg
  • View the RLink status:
    • vxprint -P
    • vxprint -l datarlk
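
For a quick health check during inspection, the flags line of the RLink can be examined directly (a minimal sketch; datarlk is the default RLink name used in this guide, so adjust it if your RLink is named differently):

# vxprint -l datarlk | grep flags

In the normal state, the line should read write enabled attached consistent connected asynchronous, as described in Table 9-5.
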
Screen Output Format
TY NAME         ASSOC        KSTATE LENGTH PLOFFS STATE    TUTIL0  PUTIL0
Table 9-4 Screen output format description of vxprint

Screen Output Format

Description

TY

Type. In general, dg indicates a disk group, dm indicates a disk, v indicates a volume, rl indicates an RLink, and rv indicates an RVG. The pl and sd entries can be ignored.

NAME

Name. It indicates the name of the volume, RVG, or RLink.

ASSOC

Association. For the volume, if it is attached to an RVG, the RVG name is displayed; otherwise, gen is displayed. For the RLink, if it is attached to an RVG, the RVG name is displayed; otherwise, - is displayed. For an srl_vol volume, if it is attached to an RVG, the RVG name is displayed; otherwise, "fsgen" is displayed.

KSTATE

Normally, it is ENABLED for the volume, CONNECT for the RLink, and ENABLED for the RVG.

STATE

Normally, it is ACTIVE for the volume, RLink, and RVG.

Table 9-5 Screen output description of vxprint -l datarlk

Field Name

Description

Disk group

The disk group to which RLINK belongs.

Rlink

RLINK name.

info

Some information about the RLINK: timeout indicates the timeout duration, and packet_size indicates the packet length.

state

The state of the RLINK. Normally, state is ACTIVE, synchronous is off, latencyprot is off, and srlprot is autodcm.

assoc

Association information of RLINK.

  • rvg refers to the RVG to which this RLINK belongs
  • remote_host refers to the name of the remote host
  • IP_addr refers to the IP address of the remote host
  • port refers to the port number of the remote host
  • remote_dg refers to the remote disk group
  • remote_dg_dgid refers to the remote disk group ID
  • remote_rvg_version refers to the RVG version number of the remote host
  • remote_rlink refers to the name of the remote host's RLINK
  • remote_rlink_rid refers to the remote host's RLINK ID
  • local_host refers to the name of the local host
  • IP_addr refers to the IP address of the local host
  • port refers to the port number of the local host

protocol

Data synchronization protocol.

flags

The flags of the RLINK. In the normal state, they should be write enabled attached consistent connected asynchronous.

Table 9-6 Screen output description of vxprint -l datarvg

Field Name

Description

Disk group

The disk group to which this RVG belongs.

Rvg

RVG name.

info

The information about RVG.

state

The state of RVG. Normally, it should be ACTIVE and kernel should be ENABLED.

assoc

The association information of RVG.

datavols refers to the data volumes contained in the RVG, srl refers to the SRLog volume contained in the RVG, and rlinks refers to the RLINK contained in the RVG.

att

The RLINK activated by the RVG.

flags

The flags of the RVG. Normally, they should be closed primary enabled attached.

device

The device information of the RVG, containing the device ID and path.

perms

The permission information of the RVG.

Checking the Disks Managed by Veritas

This command is used to check whether the disks managed by Veritas are normal during daily maintenance.

Syntax

# vxdisk list

Screen Output Format
DEVICE       TYPE            DISK         GROUP        STATUS
Table 9-7 Screen output format description of vxdisk

Screen Output Format

Description

DEVICE

Device name. It is usually in the c*t*d* format, which indicates a hard disk.

TYPE

Type. It is usually auto:sliced.

DISK

Disk name.

GROUP

Disk group name.

STATUS

Normally, it is online.
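
As a quick filter during daily maintenance, disks that are not in the online state can be listed directly (a sketch only; the header line is always printed as well):

# vxdisk list | grep -v online

If only the DEVICE/TYPE/DISK/GROUP/STATUS header line is returned, all disks managed by Veritas are online.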

Checking the Disk Groups Managed by Veritas

This command is used to check whether the disk groups managed by Veritas are normal during daily maintenance.

Syntax

# vxdg list

Screen Output Format
NAME         STATE           ID
Table 9-8 Screen output format description of vxdg

Screen Output Format

Description

NAME

Disk group name. It is datadg in the case of two hard disks, and rootdg in the case of at least three hard disks.

STATE

Normally, it is enabled.

ID

Disk group ID, which can be ignored.
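
Similarly, disk groups that are not in the enabled state can be filtered out directly (a sketch only; the header line is always printed as well):

# vxdg list | grep -v enabled

If only the NAME/STATE/ID header line is returned, all disk groups are enabled.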

Checking the Replication Status of the HA System

These commands are used to check the replication status of the HA system during preventive maintenance inspections and fault maintenance. Users can perform related operations based on the current status.

Syntax
  • # vradmin printrvg RVG name
  • # vradmin -g datadg repstatus RVG name
Screen Output Format
  • The screen output of vradmin printrvg datarvg is as follows:
    Replicated Data Set: datarvg  
    Primary:  
            HostName: 129.9.1.1 <localhost>  
            RvgName: datarvg  
            DgName: datadg  
    Secondary:  
            HostName: 129.9.1.2  
            RvgName: datarvg  
            DgName: datadg     
    Table 9-9 Screen output format description

    Screen Output Format

    Description

    Example

    Replicated Data Set

    RVG name.

    It is datarvg in this example.

    Primary

    Active server, which is the data replication source.

    -

    HostName: IP address <localhost>

    IP address of the local server.

    It is 129.9.1.1 in this example.

    RvgName

    RVG name.

    It is datarvg in this example.

    DgName

    Disk group that the RVG belongs to.

    It is datadg in this example.

    Secondary

    Standby server, which is the data replication destination.

    -

    HostName: IP address

    IP address of the remote server.

    It is 129.9.1.2 in this example.

    RvgName

    RVG name.

    It is datarvg in this example.

    DgName

    Disk group that the RVG belongs to.

    It is datadg in this example.

  • The screen output of vradmin -g datadg repstatus datarvg is as follows:
    Replicated Data Set: datarvg 
    Primary: 
      Host name:                  129.9.1.1 
      RVG name:                   datarvg 
      DG name:                    datadg 
      RVG state:                  enabled for I/O 
      Data volumes:               1 
      VSets:                      0 
      SRL name:                   srl_vol 
      SRL size:                   1.00 G 
      Total secondaries:          1 
     
    Secondary: 
      Host name:                  129.9.1.2 
      RVG name:                   datarvg 
      DG name:                    datadg 
      Data status:                consistent, up-to-date 
      Replication status:         replicating (connected) 
      Current mode:               asynchronous 
      Logging to:                 SRL 
      Timestamp Information:      behind by 0h 0m 0s
    Table 9-10 Screen output format description

    Screen Output Format

    Description

    Example

    Replicated Data Set

    RVG name.

    It is datarvg in this example.

    Primary

    Active server

    -

    Host name

    IP address of the active server.

    It is 129.9.1.1 in this example.

    RVG name

    RVG name of the active server.

    It is datarvg in this example.

    DG name

    Disk group that the RVG belongs to.

    It is datadg in this example.

    RVG state

    RVG status. Normally, the status is enabled for I/O.

    It is enabled for I/O in this example.

    Data volumes

    Disk volumes to be replicated.

    It is 1 in this example.

    SRL name

    SRL name.

    It is srl_vol in this example.

    SRL size

    SRL size, which is usually 1G.

    It is 1G in this example.

    Total secondaries

    Standby server count, which is usually 1.

    It is 1 in this example.

    Secondary

    Standby server.

    -

    Host name

    IP address of the standby server.

    It is 129.9.1.2 in this example.

    RVG name

    RVG name of the standby server.

    It is datarvg in this example.

    DG name

    Disk group that the RVG belongs to.

    It is datadg in this example.

    Data status

    Data status.
    • If the active server is synchronous with the standby server, the status is consistent, up-to-date.
    • Otherwise, the status is inconsistent, followed by the number of bytes to be synchronized.

    It is consistent, up-to-date in this example.

    Replication status

    Replication status. Normally, the status is replicating(connected).

    It is replicating(connected) in this example.

    Current mode

    Replication mode, which is usually asynchronous.

    It is asynchronous in this example.

    Logging to

    Buffer area, which is usually SRL. In the case of SRL overflow, it is DCM.

    It is SRL in this example.

    Timestamp Information

    Time stamp. If the data is consistent between the active and standby servers, it is N/A or behind by 0h 0m 0s. Otherwise, the time required for synchronization is specified.

    It is behind by 0h 0m 0s in this example.
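
    Before a switchover or other maintenance operation, the key fields can be extracted directly from the repstatus output (a minimal sketch using the default names datadg and datarvg from this guide):

    # vradmin -g datadg repstatus datarvg | egrep "Data status|Replication status|Logging to"

    The expected values are consistent, up-to-date, replicating (connected), and SRL respectively; any other value indicates that replication needs attention before you proceed.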

Checking the Service Group or Resource Status of the VCS

These commands are used to check the service group or resource status of the VCS during preventive maintenance inspections and fault maintenance.

Syntax
  • View the status of each service group in the VCS: # hastatus -sum
  • View the status of each resource in the VCS: # hastatus
NOTE:

To terminate the hastatus command, press Ctrl+C.

Screen Output Format
Table 9-11 Screen output format description of hastatus -sum

Screen Output Format

Description

A primary RUNNING 0

The VCS running status of the current node. Normally, it is RUNNING.

B AppService Primary Y N ONLINE

The name of the application group of the primary node is AppService, and the status is ONLINE.

B ClusterService Primary Y N ONLINE

The name of the heartbeat group is ClusterService, and the status is ONLINE.

B VVRService Primary Y N ONLINE

The name of the data replication group is VVRService, and the status is ONLINE.

L Icmp SecondaryCluster ALIVE

The heartbeat status between the primary and secondary nodes. Normally, it is ALIVE.

M SecondaryCluster RUNNING

The VCS running status of the remote node. Normally, it is RUNNING.

N secondaryCluster:secondary RUNNING 0

The running status of the secondary node. Normally, it is RUNNING.

O AppService SecondaryCluster:Secondary Y N OFFLINE

The application group of the secondary node. The status is OFFLINE.

Table 9-12 Screen output format description of hastatus

Screen Output Format

Description

SecondaryCluster RUNNING

The running status of the remote node. Normally, it is RUNNING.

HB:Icmp SecondaryCluster ALIVE

The heartbeat status of the remote node. Normally, it is ALIVE.

SecondaryCluster:Secondary RUNNING

The server running status of the remote server. Normally, it is RUNNING.

AppService localclus:Primary ONLINE

The running status of the local application group. Normally, the status on the active server is ONLINE and the status on the standby server is OFFLINE.

ClusterService localclus:Primary ONLINE

The running status of the local heartbeat group. Normally, it is ONLINE.

VVRService localclus:Primary ONLINE

The running status of the local data replication group. Normally, it is ONLINE.

AppService SecondaryCluster:Secondary OFFLINE

The running status of the remote application group. Normally, the status on the active server is ONLINE and the status on the standby server is OFFLINE.

EMSApp SecondaryCluster:Secondary OFFLINE

The running status of a single local resource. Normally, the application group status of the active server is ONLINE, the application group status of the standby server is OFFLINE, and the status of resources in other resource groups is ONLINE.
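
During routine inspection, the node and heartbeat states can be checked quickly by filtering the summary output (a sketch only; the expected values come from Table 9-11):

# hastatus -sum | grep RUNNING
# hastatus -sum | grep ALIVE

Both the local and remote nodes should report RUNNING, and the Icmp heartbeat should report ALIVE.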

Controlling VCS Resource Groups

The commands are used to control the VCS resource groups.

Syntax
  • # hagrp -online resource group name -sys host name
  • # hagrp -offline resource group name -sys host name
  • # hagrp -freeze resource group name -sys host name
  • # hagrp -unfreeze resource group name -sys host name
  • # hagrp -clear resource group name -sys host name
Example

The following examples follow the sequence of the commands listed above.

Example

Prerequisite

Execution Result

Remarks

# hagrp -online AppService -sys Primary

  • All the groups that the resource group depends on are online.
  • The resource group is not frozen.

The eSight server is started.

If you perform the online operation for the first time, the -force parameter is required, for example: # hagrp -online -force AppService -sys Primary

# hagrp -offline AppService -sys Primary

  • All the groups that depend on the resource group are offline.
  • The resource group is not frozen.

The eSight server is shut down.

Shut down the eSight server on the primary server.

# hagrp -freeze AppService -sys Primary

-

The resource group is frozen. The VCS function is disabled.

-

# hagrp -unfreeze AppService -sys Primary

-

The resource group is unfrozen. The VCS function is enabled.

Unfreeze the AppService group on the primary server.

# hagrp -clear AppService -sys Primary

The resource group is in the FAULT state, usually because a resource in the group is faulty.

The VCS fault flag is cleared so that the online operation can be performed.

-
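
A common maintenance pattern based on the commands above is to freeze the application group before manual work and unfreeze it afterwards (a sketch using the AppService group and the Primary host from the examples; adapt the names to your cluster):

# hagrp -freeze AppService -sys Primary
(perform the maintenance operations here)
# hagrp -unfreeze AppService -sys Primary

While the group is frozen, the VCS function for it is disabled, so remember to unfreeze the group after the maintenance is complete.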

Ending the VCS Server Forcibly

The command is used to forcibly end the VCS server in the Veritas hot backup scenario.

Syntax

# hastop -all -force

Example

# hastop -all -force

Operation result: The VCS server is forcibly shut down, but the VCS resources are not taken offline.

Starting the Maintenance Tool Process

This script starts the maintenance tool process.

Syntax

Linux: start-sysmon-ha.sh

Path

installation directory/mttools/bin

Prerequisites

On Linux, this operation is performed by the ossuser user.

Example
Linux
  1. Log in to the server as the ossuser user.
  2. Run the following command to switch to the directory.

    cd installation directory/mttools/bin

  3. Run the following command to start the maintenance tool process.

    ./start-sysmon-ha.sh

    After the command is successfully executed, the following information is displayed:

    starting mttools ..... 
    start mttools done.     

HA Switchover

This command is used to perform an active/standby switchover in a two-node cluster.

Syntax

switchTo.sh Switching mode

Path

/etc/ICMR/OSSApp

Parameter Description

Parameter

Description

Switching Mode

  • local: The local server functions as an active server.
  • remote: The local server functions as a standby server.
NOTE:

When you run the switchover command on the standby server, the local parameter is recommended. When you run the switchover command on the active server, the remote parameter is recommended.

Prerequisites
  • The heartbeat connection status between the active and standby servers is normal.
  • The data replication status between the active and standby servers is normal.
  • The data has been synchronized between the active and standby servers.
  • The active and standby servers are normal, without a fault mark.
Procedure
  1. Run the following command to check the data replication status:

    # vradmin -g <diskgroupname> repstatus <rvgname>

    Command example
    # vradmin -g datadg repstatus datarvg
    After the command is successfully executed, the following information is displayed:
    Replicated Data Set: datarvg
    Primary:
      Host name:                  10.71.210.78
      RVG name:                   datarvg
      DG name:                    datadg
      RVG state:                  enabled for I/O
      Data volumes:               4
      VSets:                      0
      SRL name:                   lv_srl
      SRL size:                   3.00 G
      Total secondaries:          1

    Secondary:
      Host name:                  10.71.210.76
      RVG name:                   datarvg
      DG name:                    datadg
      Data status:                consistent, up-to-date
      Replication status:         replicating (connected)
      Current mode:               asynchronous
      Logging to:                 SRL
      Timestamp Information:      behind by 0h 0m 0s
    NOTE:

    The system can perform an active/standby replication switchover only when Data status is consistent, up-to-date.

  2. Check the server status and resource status. For details, see Checking the Service Group or Resource Status of the VCS.

    NOTE:

    The system can perform an active/standby replication switchover only when the active and standby servers are in the RUNNING state, the heartbeat status is ALIVE, the application service group (for example, AppService) of the active server is ONLINE, and the application service group of the standby server is OFFLINE.

Example
  1. Log in to the secondary server as the root user.
  2. Run the following command to switch to the directory.

    cd /etc/ICMR/OSSApp

  3. Run the following command to perform an active/standby switchover:

    ./switchTo.sh local

    The command has been executed successfully when the following message is displayed. The two-node cluster then performs a switchover. To check the switchover result, view the resource status of the two-node cluster.
    Switch command execute successfully.  
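
  4. (Optional) To confirm the switchover result, check the VCS status as described in Checking the Service Group or Resource Status of the VCS (a brief sketch; AppService is the example group name used in this guide):

    # hastatus -sum

    The application service group (for example, AppService) should now be ONLINE on the new active server and OFFLINE on the new standby server.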