Thursday 27 December 2018

report service kill command

kill -9 $(ps -ef | grep [N]odeManager | awk '{print $2}')

kill -9 $(ps -ef | grep oracle | grep report | awk '{print $2}')
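
A shorter equivalent, assuming the process command lines really contain these strings, is pkill with full-command-line matching (no grep/awk pipeline needed):

pkill -9 -f NodeManager
pkill -9 -f 'oracle.*report'    # assumes "oracle" appears before "report" on the command line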

Monday 17 December 2018

NFS volume mount in solaris 11

svcs -a |grep nfs
 
On NFS server
# share -F nfs -o rw=[accesslist] /path/to/share
# share -F nfs -o rw=client1:client2 /export
# share -F nfs -o rw=@client1 /export
# showmount -e 
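
If the NFS server service is not online yet, it can be enabled with svcadm; on Solaris 11 a share can also be made persistent through the ZFS sharenfs property (the dataset name below is only an example):
# svcadm enable nfs/server
# zfs set sharenfs=on rpool/export/BARMAN    # example dataset name, adjust to your pool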
 
On NFS client
# mount -F nfs -o [options] [NFS_server]:[share_path] [local_mountpoint]
 
  
  # mount -F nfs  10.11.1.xxx:/BARMAN /BARMAN
  # mount -F nfs -o rsize=32768,wsize=32768  10.11.1.xxx:/BARMAN /BARMAN
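
To make the mount persistent across reboots, an /etc/vfstab entry along these lines should work (same server and paths as above):
#device to mount     device to fsck  mount point  FS type  fsck pass  mount at boot  mount options
10.11.1.xxx:/BARMAN  -               /BARMAN      nfs      -          yes            rw,rsize=32768,wsize=32768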
 
 
 
NFS volume mount in RedHat linux 7.6
 
 
For Database server:

# groupadd nfs
# usermod -a -G nfs nfsnobody
# chmod 0770 /nfs
# chgrp nfs /nfs
 
vi /etc/exports
/nfs box1(rw,sync,sec=krb5,anongid=1004) 
/u01 *(rw,sync,all_squash,anonuid=54321,anongid=54321)

# exportfs -arv

service nfs start/stop/restart/status
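
On RHEL 7.6 the NFS server is also exposed as the systemd unit nfs-server, so the systemctl equivalents are:
# systemctl enable nfs-server
# systemctl start nfs-server
# systemctl status nfs-server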


local directory: /hrm/PDF

For Application:
mount -t nfs 10.11.1.XXX:/hrm/PDF /u01/orbhrm/pdf/
mount -t nfs -o rw 10.11.1.XXX:/u05/test /u05/test
(note: user=, pass=, file_mode=, dir_mode=, uid= and gid= are CIFS mount options, not NFS options; with NFS, ownership and permissions come from the exported file system itself)

local Directory: /u01/orbhrm/pdf

entry /etc/fstab
server:/usr/local/pub    /pub   nfs    defaults 0 0

10.11.1.xxx:/hrm/PDF          /u01/orbhrm/pdf   nfs   defaults  0 0
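
Adding the _netdev option makes the mount wait for networking at boot; the entry can then be tested without a reboot (same paths as above):
10.11.1.xxx:/hrm/PDF          /u01/orbhrm/pdf   nfs   defaults,_netdev  0 0

# mount -a -t nfs
# df -h /u01/orbhrm/pdf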
 

Tuesday 4 December 2018

Raid config for Sun T7-1 server

RAID Volumes:

From the ILOM CLI, keep the host at the OpenBoot ok prompt on the next reset:

-> set /HOST/bootmode script="setenv auto-boot? false"

 

ok show-devs
...
/pci@301/pci@2/scsi@0/disk@p0

 

ok devalias
...
scsi0                    /pci@301/pci@2/scsi@0
scsi                     /pci@301/pci@2/scsi@0
...
ok select scsi
ok show-volumes
Volume 0 Target 389  Type RAID1 (Mirroring)
  WWID 03b2999bca4dc677
  Optimal  Enabled  Inactive 
  2 Members                    583983104 Blocks, 298 GB
  Disk 1 
    Primary  Optimal 
    Target 9      HITACHI  H103030SCSUN300G A2A8
  Disk 0 
    Secondary  Optimal 
    Target c      HITACHI  H103030SCSUN300G A2A8
To activate the inactive volume, type its volume number followed by activate-volume:
ok 0 activate-volume
Volume 0 is now activated
ok unselect-dev
ok probe-scsi-all
/pci@301/pci@2/scsi@0
 
FCode Version 1.00.54, MPT Version 2.00, Firmware Version 5.00.17.00
 
Target a 
  Unit 0   Removable Read Only device   TEAC    DV-W28SS-R      1.0C                    
  SATA device  PhyNum 3 
Target b 
  Unit 0   Disk   SEAGATE  ST914603SSUN146G 0868    286739329 Blocks, 146 GB
  SASDeviceName 5000c50016f75e4f  SASAddress 5000c50016f75e4d  PhyNum 1 
Target 389 Volume 0 
  Unit 0   Disk   LSI      Logical Volume   3000    583983104 Blocks, 298 GB
  VolumeDeviceName 33b2999bca4dc677  VolumeWWID 03b2999bca4dc677
 
/pci@300/pci@2/usb@0/hub@3/storage@1/disk@0
  Unit 0   Removable Read Only device    AMI     Virtual CDROM   1.00
ok setenv auto-boot? true
From the OS, RAID volumes can also be managed with the raidconfig utility (part of the Oracle Hardware Management Pack). To create a RAID volume, type:
# raidconfig create raid options -d disks
For example, to create a RAID 0 volume with a stripe size of 128 KB and read-ahead caching enabled on controller 1, type the following command:
# raidconfig create raid --stripe-size=128 --read-cache=enabled -d c1d0,c1d1
To delete a RAID volume, type:
# raidconfig delete raid option
For example:
  • To delete RAID volume 1 created on controller 1, type:
    # raidconfig delete raid -r c1r1
  • To delete all RAID volumes, type:
    # raidconfig delete raid --all
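
To verify the volumes afterwards, the Hardware Management Pack also provides a list subcommand; this is from memory, so treat the exact syntax as an assumption:
# raidconfig list all    # assumption: prints controllers, RAID volumes and disks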

Wednesday 21 November 2018

Scanning FC LUNs in RedHat Linux

# fdisk -l 2> /dev/null | egrep '^Disk' | egrep -v 'dm-' |wc -l

# ls /sys/class/fc_host/
host1  host8

For FC
#  echo "1" > /sys/class/fc_host/host1/issue_lip
#  echo "1" > /sys/class/fc_host/host8/issue_lip

For SCSI

#  echo "- - -" > /sys/class/scsi_host/host1/scan
#  echo "- - -" > /sys/class/scsi_host/host8/scan


# fdisk -l 2> /dev/null | egrep '^Disk' | egrep -v 'dm-' |wc -l
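
With many HBAs it is easier to loop over everything sysfs exposes; a small sketch assuming the standard sysfs layout:
# for h in /sys/class/fc_host/host*;   do echo "1"     > "$h/issue_lip"; done
# for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan";      done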

Monday 12 November 2018

Switch port conflict

How to resolve a switch port conflict on a Brocade switch (a concrete sketch follows the list):

1. portdisable (ISL ports)
2. cfgdisable
3. cfgclear
4. cfgsave
5. defzone --allaccess
6. portenable (ISL ports)
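
A concrete run, with hypothetical ISL ports 0 and 1 standing in for the real ones:
portdisable 0
portdisable 1
cfgdisable
cfgclear
cfgsave
defzone --allaccess
portenable 0
portenable 1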

Tuesday 6 November 2018

multipath

multipath -ll
multipathd -k
multipathd> show config
multipathd> show maps
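
The same queries can be passed straight on the command line, which is handy in scripts:
multipathd -k"show config"
multipathd -k"show maps"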

Tuesday 30 October 2018

Configuring Multipathing for Linux


Configuring Multipathing
The procedure in this section demonstrates how to set up a simple multipath configuration.
To configure multipathing on a server with access to SAN-attached storage:
1. Install the device-mapper-multipath package:
# yum install device-mapper-multipath
2. You can now choose one of two configuration paths:
  • To set up a basic standby failover configuration without editing the /etc/multipath.conf configuration file, enter the following command:
# mpathconf --enable --with_multipathd y
This command also starts the multipathd service and configures the service to start after system reboots.
Skip the remaining steps of this procedure.
  • To edit /etc/multipath.conf and set up a more complex configuration such as active/active, follow the remaining steps in this procedure.
3. Initialize the /etc/multipath.conf file:
# mpathconf --enable
4. Edit /etc/multipath.conf and define defaults, blacklist, blacklist_exceptions, multipaths, and devices sections as required, for example:
  defaults {
      udev_dir              /dev
      polling_interval      10
      path_selector         "round-robin 0"
      path_grouping_policy  multibus
      getuid_callout        "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
      prio                  alua
      path_checker          readsector0
      rr_min_io             100
      max_fds               8192
      rr_weight             priorities
      failback              immediate
      no_path_retry         fail
      user_friendly_names   yes
  }
   
  blacklist {
      # Blacklist by WWID
      wwid "*"
   
      # Blacklist by device name
      devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
   
      # Blacklist by device type
      device {
        vendor    "COMPAQ  "
        product   "HSV110 (C)COMPAQ"
      }
  }
   
  blacklist_exceptions {
      wwid "3600508b4000156d700012000000b0000"
      wwid "360000970000292602744533032443941"
  }
   
  multipaths {
      multipath {
          wwid                  3600508b4000156d700012000000b0000
          alias                 blue
          path_grouping_policy  multibus
          path_checker          readsector0
          path_selector         "round-robin 0"
          failback              manual
          rr_weight             priorities
          no_path_retry         5
      }
      multipath {
          wwid                  360000970000292602744533032443941
          alias                 green
      }
  }
   
  devices {
      device {
          vendor                "SUN"
          product               "(StorEdge 3510|T4"
          path_grouping_policy  multibus
          getuid_callout        "/sbin/scsi_id --whitelisted --device=/dev/%n"
          path_selector         "round-robin 0"
          features              "0"
          hardware_handler      "0"
          path_checker          directio
          prio                  const
          rr_weight             uniform
          rr_min_io             1000
      }
  }
The sections have the following purposes:
defaults
Defines default multipath settings, which can be overridden by settings in the devices section, and which in turn can be overridden by settings in the multipaths section.
blacklist
Defines devices that are excluded from multipath topology discovery. Blacklisted devices cannot be subsumed by a multipath device.
The example shows the three ways to exclude devices: by WWID (wwid), by device name (devnode), and by device type (device).
blacklist_exceptions
Defines devices that are included in multipath topology discovery, even if the devices are implicitly or explicitly listed in the blacklist section.
multipaths
Defines settings for a multipath device that is identified by its WWID.
The alias attribute specifies the name of the multipath device as it will appear in /dev/mapper instead of a name based on either the WWID or the multipath group number.
To obtain the WWID of a SCSI device, use the scsi_id command:
# scsi_id --whitelisted --replace-whitespace --device=device_name
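For example, against a hypothetical path device /dev/sdb:
# scsi_id --whitelisted --replace-whitespace --device=/dev/sdb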
devices
Defines settings for individual types of storage controller. Each controller type is identified by the vendor, product, and optional revision settings, which must match the information in sysfs for the device.
You can find details of the storage arrays that DM-Multipath supports and their default configuration values in /usr/share/doc/device-mapper-multipath-version/multipath.conf.defaults, which you can use as the basis for entries in /etc/multipath.conf.
To add a storage device that DM-Multipath does not list as being supported, obtain the vendor, product, and revision information from the vendor, model, and rev files under /sys/block/device_name/device.
The following entries in /etc/multipath.conf would be appropriate for setting up active/passive multipathing to an iSCSI LUN with the specified WWID.
defaults {
    user_friendly_names    yes
    getuid_callout         "/bin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
}

multipaths {
    multipath {
        wwid 360000970000292602744533030303730
    }
}
In this standby failover configuration, I/O continues through a remaining active network interface if a network interface fails on the iSCSI initiator.
For more information about configuring entries in /etc/multipath.conf, refer to the multipath.conf(5) manual page.
5. Start the multipathd service and configure the service to start after system reboots:
# systemctl start multipathd
# systemctl enable multipathd
Multipath devices are identified in /dev/mapper by their World Wide Identifier (WWID), which is globally unique. Alternatively, if you set the value of user_friendly_names to yes in the defaults section of /etc/multipath.conf, or specify the --user_friendly_names y option to mpathconf, the device is named mpathN, where N is the multipath group number. An alias attribute in the multipaths section of /etc/multipath.conf specifies the name of the multipath device instead of a name based on either the WWID or the multipath group number.
You can use the multipath device in /dev/mapper to reference the storage in the same way as you would any other physical storage device. For example, you can configure it as an LVM physical volume, file system, swap partition, Automatic Storage Management (ASM) disk, or raw device.
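For example, a minimal sketch that turns a multipath device such as /dev/mapper/mpath1 into an LVM volume with an XFS file system (the volume group and logical volume names are made up here):
# pvcreate /dev/mapper/mpath1
# vgcreate datavg /dev/mapper/mpath1       # "datavg" is an example name
# lvcreate -l 100%FREE -n datalv datavg
# mkfs.xfs /dev/datavg/datalv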
To display the status of DM-Multipath, use the mpathconf command, for example:
# mpathconf
multipath is enabled
find_multipaths is enabled
user_friendly_names is enabled
dm_multipath module is loaded
multipathd is running
To display the current multipath configuration, specify the -ll option to the multipath command, for example:
# multipath -ll
mpath1 (360000970000292602744533030303730) dm-0 SUN,(StorEdge 3510|T4
size=20G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 5:0:0:2 sdb 8:16    active ready running
`-+- policy='round-robin 0' prio=1 status=active
  `- 5:0:0:3 sdc 8:32    active ready running
In this example, /dev/mapper/mpath1 subsumes two paths (/dev/sdb and /dev/sdc) to 20 GB of storage in an active/active configuration using round-robin I/O path selection. The WWID that identifies the storage is 360000970000292602744533030303730 and the name of the multipath device under sysfs is dm-0.
If you edit /etc/multipath.conf, restart the multipathd service to make it re-read the file:
# systemctl restart multipathd
For more information, see the mpathconf(8), multipath(8), multipathd(8), multipath.conf(5), and scsi_id(8) manual pages.