Tuesday, 30 April 2019

Configure SFTP server


Create a group for collaborative users.
[root@fileserver-01 ~]# groupadd -g 1501 dev
Create three collaborative users with supplementary group dev and login shell /sbin/nologin to restrict shell access.
[root@fileserver-01 ~]# useradd -u 1001 -G dev -s /sbin/nologin ahmer
[root@fileserver-01 ~]# useradd -u 1002 -G dev -s /sbin/nologin mansoor
[root@fileserver-01 ~]# useradd -u 1003 -G dev -s /sbin/nologin danish
Set the home directory of each user to /common; once the chroot is applied, this path resolves to /chroot/sftp/common.
[root@fileserver-01 ~]# usermod -d /common ahmer
[root@fileserver-01 ~]# usermod -d /common mansoor
[root@fileserver-01 ~]# usermod -d /common danish
Set passwords for the users.
[root@fileserver-01 ~]# echo 123 | passwd ahmer --stdin
Changing password for user ahmer.
passwd: all authentication tokens updated successfully.
[root@fileserver-01 ~]# echo 123 | passwd mansoor --stdin
Changing password for user mansoor.
passwd: all authentication tokens updated successfully.
[root@fileserver-01 ~]# echo 123 | passwd danish --stdin
Changing password for user danish.
passwd: all authentication tokens updated successfully.
[root@fileserver-01 ~]#
Create a directory tree for collaboration and adjust permissions on it according to the requirement. Note that the chroot directory itself (/chroot/sftp) must be owned by root and must not be writable by group or others, which is why it gets mode 555.
[root@fileserver-01 ~]# mkdir -p /chroot/sftp
[root@fileserver-01 ~]# chmod 555 /chroot/sftp
[root@fileserver-01 ~]# mkdir /chroot/sftp/common/
[root@fileserver-01 ~]# chgrp dev /chroot/sftp/common/
[root@fileserver-01 ~]# chmod 2775 /chroot/sftp/common/
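The leading 2 in mode 2775 is the setgid bit: files and subdirectories created inside /chroot/sftp/common/ inherit the dev group, which is what makes the shared directory work. A quick local sketch of the effect, using a throwaway directory (path names are illustrative):

```shell
# Demonstrate setgid inheritance on a 2775 directory (throwaway path)
tmpdir=$(mktemp -d)
chmod 2775 "$tmpdir"       # rwxrwsr-x: the 's' is the setgid bit
umask 022
mkdir "$tmpdir/sub"        # subdirectories inherit the setgid bit
touch "$tmpdir/file"       # new files inherit the directory's group
stat -c '%a %G %n' "$tmpdir" "$tmpdir/sub" "$tmpdir/file"
rm -rf "$tmpdir"
```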
Configure sshd service to handle the collaborative users.
[root@fileserver-01 ~]# vi /etc/ssh/sshd_config
Search for and comment out the following line.
#Subsystem       sftp    /usr/libexec/openssh/sftp-server
Add the following lines at the end of /etc/ssh/sshd_config.
Subsystem       sftp    internal-sftp

Match Group dev
 X11Forwarding no
 AllowTcpForwarding no
 ChrootDirectory /chroot/sftp/
 ForceCommand internal-sftp -u 007
We set the umask to 007 so that files created over SFTP are readable and writable by the owner and the dev group but inaccessible to all other users. Restart the sshd service for the changes to take effect.
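What the 007 umask means in practice can be sketched locally; the directory and file names below are illustrative:

```shell
# What umask 007 means for files created through the SFTP session (throwaway path)
tmpdir=$(mktemp -d)
cd "$tmpdir"
umask 007
touch report.txt    # files: 666 minus 007 -> 660 (owner+group rw, others nothing)
mkdir shared        # directories: 777 minus 007 -> 770
stat -c '%a %n' report.txt shared
cd / && rm -rf "$tmpdir"
```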

Monday, 1 April 2019

Adding a subscription in Red Hat Linux

  • SSH into, or open a console on, the Red Hat system you want to attach the subscription to.
  • Run subscription-manager register and then subscription-manager attach --pool=<POOL_ID>. The pool ID can be found with subscription-manager list --available; this ID is unique to your subscription.
  • Internet connectivity is required, as the system authenticates the subscription against access.redhat.com.
  • Verify the subscription with the subscription-manager status command: an output of "Current" means the system is registered with a proper subscription; anything else is reported as "Invalid".
  • The subscription can be attached to any Red Hat system running RHEL 5.7 up to the latest release (7.5).
  • If the system had a previously attached subscription, run subscription-manager clean ; subscription-manager refresh to clear out the old metadata.
  • If any repositories were previously added manually (not through a Red Hat subscription), run rm -fr /etc/yum.repos.d/* to delete the repository configuration files that are not attached through a valid Red Hat subscription.
  • You can also see which system the subscription is attached to (by hostname/UUID) in your purchase account at access.redhat.com.
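Putting the steps above together, the registration flow looks like this (a sketch; run as root on the target system, with <POOL_ID> taken from the list output):

```shell
subscription-manager register                  # prompts for Red Hat portal credentials
subscription-manager list --available          # note the Pool ID of your subscription
subscription-manager attach --pool=<POOL_ID>
subscription-manager status                    # should report "Current"
```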

    Resolution

    If you continue to see applicable errata displayed in RHSM, it can mean one of a couple of things that need to be addressed:
  • The system has not checked in recently, and there is a discrepancy between what you see in the Customer Portal and what is actually installed on your system. In this case, you may want to check which errata are available on your system, force a check-in, and run yum update again:
# yum update
# rm -f /var/lib/rhsm/packages/packages.json
# service rhsmcertd stop 
# rhsmcertd --now
Note: After forcing your system to check in again, please wait up to four hours for the errata data in RHSM to update.

Tuesday, 19 February 2019

How to rescan and recover LUN paths in a host after modifying SLM reporting nodes

Microsoft Windows hosts:
  • Rescan after add-reporting-nodes and remove-reporting-nodes using Windows GUI.
    1. Open Computer Management (Local)
    2. In the console tree, click Computer Management (Local) >> Storage >> Disk Management
    3. In the disk management page click Action >> Rescan Disks. This will rescan all the disks and update any path changes.
  • Rescan after add-reporting-nodes and remove-reporting-nodes using command line.
    1. Open Command Prompt and enter the following text:
      # diskpart
    2. At the DISKPART> prompt, enter the following text:
      DISKPART> rescan
      This will rescan all the disks and update any path changes. For more information, see Microsoft TechNet Updatedisk.
Linux hosts:
  • Rescan after add-reporting-nodes.
    1. From RHEL 6.5 and RHEL 7.0 onwards, run the following command to update active/optimized paths after add-reporting-nodes:
      # /usr/bin/rescan-scsi-bus.sh -a
    2. For RHEL 5 and RHEL 6.4 (including previous updates), run the following command to update active/optimized paths after add-reporting-nodes:
      # /usr/bin/rescan-scsi-bus.sh
        Note: Nothing additional has to be done in the multipath layer.
  • Rescan after remove-reporting-nodes.
    1. Separate rescan steps are required for the SCSI layer and the multipathing layer in the Linux storage stack to clean up stale disk paths after remove-reporting-nodes in SLM.
    2. Run the following command to remove stale LUN paths in the SCSI layer:
      # /usr/bin/rescan-scsi-bus.sh -r
    3. Next, run the following command to remove stale LUN paths in the multipath layer:
      # multipath -r
Solaris hosts:
  • Rescan after add-reporting-nodes.
    1. For iSCSI LUNs, run the following command:
      # devfsadm -i iscsi
    2. For FC/FCoE LUNs, perform the following steps:
      1. Run the following command to identify the OS device names of the HBA ports that are accessing NetApp LUNs:
        # cfgadm -al -o show_FCP_dev | grep fc-fabric
        c3 fc-fabric connected configured unknown
        c4 fc-fabric connected configured unknown
      2. Now run the following command for each <controller> to be rescanned:
        # cfgadm -c configure <controller>
        For example, from step 1, c3 and c4 are the controller names, so the commands would be:
        # cfgadm -c configure c3
        # cfgadm -c configure c4
  • Rescan after remove-reporting-nodes.
    1. For iSCSI LUNs, run the following commands:
      # devfsadm -i iscsi
      # devfsadm -Cv
    2. For FC/FCoE LUNs, perform the following steps:
      1. If the host is accessing NetApp LUNs through a single FC port, it is advised to reboot the host. Run the following commands to reconfigure and reboot the host:
        # touch /reconfigure
        # init 6
      2. If the host is accessing NetApp LUNs through two or more FC ports, run the following command to identify the OS device names of the HBA ports:
        # cfgadm -al -o show_FCP_dev | grep fc-fabric
        c3 fc-fabric connected configured unknown
        c4 fc-fabric connected configured unknown
      3. Run the following commands to reconfigure each port, one after the other:
        # cfgadm -c unconfigure <controller>
        # cfgadm -c configure <controller>

        For example, from the output above, c3 and c4 are the controller names, so the commands would be similar to the following:
        # cfgadm -c unconfigure c3
        # cfgadm -c configure c3
        # cfgadm -c unconfigure c4
        # cfgadm -c configure c4

        Note: The above step should be performed for only one port at a time.
      4. Run the following command to clean up the devices:
        # devfsadm -Cv
      5. To clear MPxIO entries, an OS reboot is needed; this can be performed during a planned downtime. Run the following commands to reconfigure and reboot the host:
        # touch /reconfigure
        # init 6
      6. Once the host is back up after the reboot, run the following command:
        # devfsadm -Cv
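On Linux hosts, the result of the rescans above can be verified from the multipath layer. This is a sketch: multipath -ll is standard with device-mapper-multipath, while sanlun is available only if the NetApp Host Utilities are installed:

```shell
multipath -ll        # list multipath devices; paths should show active/ready, none failed or faulty
sanlun lun show -p   # NetApp Host Utilities: per-LUN path view, if installed
```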

Monday, 11 February 2019

HBA Debugging (Fibre Channel)

List the logical units and their paths:

mpathadm list LU

Show link details for each HBA port, identified by its port WWN:

$ fcinfo hba-port -l 2100000e1eca86e1

$ fcinfo hba-port -l 2100000e1eca86e0

Sunday, 10 February 2019

SSH Passwordless Login Using SSH Keygen in 5 Easy Steps

SSH Client : 10.11.1.x
SSH Remote Host : 10.88.1.x
 

Step 1: Create Authentication SSH-Keygen Keys on 10.11.1.x

$ ssh-keygen -t rsa

Generating public/private rsa key pair.
Enter file in which to save the key (/home/tecmint/.ssh/id_rsa): [Press enter key]
Created directory '/home/tecmint/.ssh'.
Enter passphrase (empty for no passphrase): [Press enter key]
Enter same passphrase again: [Press enter key]
Your identification has been saved in /home/tecmint/.ssh/id_rsa.
Your public key has been saved in /home/tecmint/.ssh/id_rsa.pub.
The key fingerprint is:
5f:ad:40:00:8a:d1:9b:99:b3:b0:f8:08:99:c3:ed:d3 tecmint@tecmint.com
The key's randomart image is:
+--[ RSA 2048]----+
|        ..oooE.++|
|         o. o.o  |
|          ..   . |
|         o  . . o|
|        S .  . + |
|       . .    . o|
|      . o o    ..|
|       + +       |
|        +.       |
+-----------------+
 

Step 2: Create .ssh Directory on 10.88.1.x

ssh oracle@10.88.1.X mkdir -p .ssh

The authenticity of host '10.88.1.X (10.88.1.X)' can't be established.
RSA key fingerprint is 45:0e:28:11:d6:81:62:16:04:3f:db:38:02:1a:22:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.88.1.X' (ECDSA) to the list of known hosts.
oracle@10.88.1.X's password: [Enter Your Password Here]
 

Step 3: Upload Generated Public Keys to 10.88.1.x

$ cat .ssh/id_rsa.pub | ssh oracle@10.88.1.X 'cat >> .ssh/authorized_keys'

oracle@10.88.1.X's password: [Enter Your Password Here]

Step 4: Set Permissions on 10.88.1.x

 

ssh oracle@10.88.1.X "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"

oracle@10.88.1.X's password: [Enter Your Password Here]
 
 

Step 5: Login from 10.11.1.X to 10.88.1.x Server without Password

 ssh oracle@10.88.1.X
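Steps 2 to 4 can also be done with a single command, ssh-copy-id, which creates the remote .ssh directory, appends the public key, and sets sane permissions (a sketch using the same example host and user):

```shell
# Equivalent of Steps 2-4 in one command (prompts for the remote password once):
ssh-copy-id -i ~/.ssh/id_rsa.pub oracle@10.88.1.X
# Verify that key-based login now works without falling back to a password:
ssh -o PasswordAuthentication=no oracle@10.88.1.X true
```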
Wednesday, 6 February 2019

How to disable cleartext authentication mechanisms in the AMQP configuration


Solution:
The PLAIN authentication mechanism can be disabled in either of two ways:
1. By removing the cyrus-sasl-plain package, because a corresponding package is required for each mechanism.

#yum remove cyrus-sasl-plain
#/etc/init.d/qpidd restart

Or

2. SASL is a framework that supports a variety of authentication mechanisms, such as CRAM-MD5, DIGEST-MD5, or GSSAPI. Edit the qpidd configuration file, append the mech_list parameter, and specify the allowed authentication mechanisms as below.

#vi /etc/sasl2/qpidd.conf
mech_list: DIGEST-MD5

#/etc/init.d/qpidd restart
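The mech_list value is a space-separated list, so several non-cleartext mechanisms can be allowed at once; a sketch (each mechanism listed needs its corresponding cyrus-sasl package installed):

```
# /etc/sasl2/qpidd.conf
mech_list: DIGEST-MD5 CRAM-MD5
```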