Wednesday 26 February 2020

NFS for solaris 11

NFS Commands

These commands must be run as root to be fully effective, but requests for information can be made by all users:

automount Command

This command installs autofs mount points and associates the information in the automaster files with each mount point. The syntax of the command is as follows:
automount [ -t duration ] [ -v ]
-t duration sets the time, in seconds, that a file system is to remain mounted, and -v selects the verbose mode. Running this command in the verbose mode allows for easier troubleshooting.
If not specifically set, the value for duration is set to 5 minutes. In most circumstances, this value is good. However, on systems that have many automounted file systems, you might need to increase the duration value. In particular, if a server has many users active, checking the automounted file systems every 5 minutes can be inefficient. Checking the autofs file systems every 1800 seconds, which is 30 minutes, could be more optimal. By not unmounting the file systems every 5 minutes, /etc/mnttab can become large. To reduce the output when df checks each entry in /etc/mnttab, you can filter the output from df by using the -F option (see the df(1M) man page) or by using egrep.
You should consider that adjusting the duration also changes how quickly changes to the automounter maps are reflected. Changes cannot be seen until the file system is unmounted. Refer to Modifying the Maps for instructions on how to modify automounter maps.
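For example, to rerun the automounter in verbose mode with a 30-minute timeout (a sketch based on the syntax above):

# automount -t 1800 -v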
The same specifications that you would make on the command line can be made using the sharectl command. However, unlike command-line options, specifications stored in the SMF repository are preserved through service restarts, system reboots, and system upgrades. The following parameters can be set for the automount command.
timeout
Sets the duration for a file system to remain idle before the file system is unmounted. This keyword is the equivalent of the -t argument for the automount command. The default value is 600.
automount_verbose
Provides notification of autofs mounts, unmounts, and other nonessential events. This keyword is the equivalent of the -v argument for automount. The default value is FALSE.
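For example, the following sketch persists the same settings in the SMF repository (assuming autofs is the protocol name that sharectl expects; the sharectl syntax is covered later in this section):

# sharectl set -p timeout=1800 autofs
# sharectl set -p automount_verbose=true autofs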

clear_locks Command

This command enables you to remove all file, record, and share locks for an NFS client. You must be root to run this command. From an NFS server, you can clear the locks for a specific client. From an NFS client, you can clear locks for that client on a specific server. The following example would clear the locks for the NFS client that is named tulip on the current system.
# clear_locks tulip
Using the -s option enables you to specify from which NFS host to clear the locks. You must run this option from the NFS client that created the locks. In the following example, the locks created by the current client would be removed from the NFS server that is named bee.
# clear_locks -s bee

Caution - This command should only be run when a client crashes and cannot clear its locks. To avoid data corruption problems, do not clear locks for an active client.

fsstat Command

The fsstat utility enables you to monitor file system operations by file system type and by mount point. Various options allow you to customize the output. See the following examples.
This example shows output for NFS version 3, version 4, and the root mount point.
% fsstat nfs3 nfs4 /
  new     name   name    attr    attr   lookup   rddir   read   read   write   write
 file    remov   chng     get     set      ops     ops    ops  bytes     ops   bytes
3.81K       90  3.65K   5.89M   11.9K    35.5M   26.6K   109K   118M   35.0K   8.16G  nfs3
  759      503    457   93.6K   1.44K     454K   8.82K  65.4K   827M     292    223K  nfs4
25.2K    18.1K  1.12K   54.7M    1017     259M   1.76M  22.4M  20.1G   1.43M   3.77G  /
This example uses the -i option to provide statistics about the I/O operations for NFS version 3, version 4, and the root mount point.
% fsstat -i nfs3 nfs4 /
 read    read    write   write   rddir   rddir   rwlock   rwulock
  ops   bytes      ops   bytes     ops   bytes      ops       ops
 109K    118M    35.0K   8.16G   26.6K   4.45M     170K      170K  nfs3
65.4K    827M      292    223K   8.82K   2.62M    74.1K     74.1K  nfs4
22.4M   20.1G    1.43M   3.77G   1.76M   3.29G    25.5M     25.5M  /
This example uses the -n option to provide statistics about the naming operations for NFS version 3, version 4, and the root mount point.
% fsstat -n nfs3 nfs4 /
lookup   creat   remov  link   renam  mkdir  rmdir   rddir  symlnk  rdlnk
 35.5M   3.79K      90     2   3.64K      5      0   26.6K      11   136K  nfs3
  454K     403     503     0     101      0      0   8.82K     356  1.20K  nfs4
  259M   25.2K   18.1K   114    1017     10      2   1.76M      12  8.23M  /
For more information, see the fsstat(1M) man page.

mount Command

With this command, you can attach a named file system, either local or remote, to a specified mount point. For more information, see the mount(1M) man page. Used without arguments, mount displays a list of file systems that are currently mounted on your computer.
Many types of file systems are included in the standard Oracle Solaris installation. Each file-system type has a specific man page that lists the options to mount that are appropriate for that file-system type. The man page for NFS file systems is mount_nfs(1M). For UFS file systems, see mount_ufs(1M).
The Solaris 7 release includes the ability to select a path name to mount from an NFS server by using an NFS URL instead of the standard server:/pathname syntax. See How to Mount an NFS File System Using an NFS URL for further information.

Caution - The mount command does not warn about invalid options. The command silently ignores any options that cannot be interpreted. Ensure that you verify all of the options that were used so that you can prevent unexpected behavior.

mount Options for NFS File Systems

The subsequent text lists some of the options that can follow the -o flag when you are mounting an NFS file system. For a complete list of options, refer to the mount_nfs(1M) man page.
bg|fg
These options can be used to select the retry behavior if a mount fails. The bg option causes the mount attempts to be run in the background. The fg option causes the mount attempt to be run in the foreground. The default is fg, which is the best selection for file systems that must be available. This option prevents further processing until the mount is complete. bg is a good selection for noncritical file systems because the client can do other processing while waiting for the mount request to be completed.
forcedirectio
This option improves performance of large sequential data transfers. Data is copied directly to a user buffer. No caching is performed in the kernel on the client. This option is off by default.
Previously, all write requests were serialized by both the NFS client and the NFS server. The NFS client has been modified to permit an application to issue concurrent writes, as well as concurrent reads and writes, to a single file. You can enable this functionality on the client by using the forcedirectio mount option. When you use this option, you are enabling this functionality for all files within the mounted file system. You could also enable this functionality on a single file on the client by using the directio() interface. Unless this functionality has been enabled, writes to files are serialized. Also, if concurrent writes or concurrent reads and writes are occurring, then POSIX semantics are no longer being supported for that file.
For an example of how to use this option, refer to Using the mount Command.
largefiles
With this option, you can access files that are larger than 2 Gbytes. Whether a large file can be accessed can only be controlled on the server, so this option is silently ignored on NFS version 3 mounts. By default, all UFS file systems are mounted with largefiles. For mounts that use the NFS version 2 protocol, the largefiles option causes the mount to fail with an error.
nolargefiles
This option for UFS mounts guarantees that no large files can exist on the file system. See the mount_ufs(1M) man page. Because the existence of large files can only be controlled on the NFS server, no option for nolargefiles exists when using NFS mounts. Attempts to NFS-mount a file system by using this option are rejected with an error.
nosuid|suid
The nosuid option is the equivalent of specifying the nodevices option with the nosetuid option. When the nodevices option is specified, the opening of device-special files on the mounted file system is disallowed. When the nosetuid option is specified, the setuid bit and setgid bit in binary files that are located in the file system are ignored. The processes run with the privileges of the user who executes the binary file.
The suid option is the equivalent of specifying the devices option with the setuid option. When the devices option is specified, the opening of device-special files on the mounted file system is allowed. When the setuid option is specified, the setuid bit and the setgid bit in binary files that are located in the file system are honored by the kernel.
If neither option is specified, the default option is suid, which provides the default behavior of specifying the devices option with the setuid option.
The following table describes the effect of combining nosuid or suid with devices or nodevices, and setuid or nosetuid. Note that in each combination of options, the most restrictive option determines the behavior.
Behavior From the Combined Options

Option    Option      Option       Behavior
nosuid    nosetuid    nodevices    The equivalent of nosetuid with nodevices
nosuid    nosetuid    devices      The equivalent of nosetuid with nodevices
nosuid    setuid      nodevices    The equivalent of nosetuid with nodevices
nosuid    setuid      devices      The equivalent of nosetuid with nodevices
suid      nosetuid    nodevices    The equivalent of nosetuid with nodevices
suid      nosetuid    devices      The equivalent of nosetuid with devices
suid      setuid      nodevices    The equivalent of setuid with nodevices
suid      setuid      devices      The equivalent of setuid with devices
The nosuid option provides additional security for NFS clients that access potentially untrusted servers. The mounting of remote file systems with this option reduces the chance of privilege escalation through importing untrusted devices or importing untrusted setuid binary files. All these options are available in all Oracle Solaris file systems.
public
This option forces the use of the public file handle when contacting the NFS server. If the public file handle is supported by the server, the mounting operation is faster because the MOUNT protocol is not used. Also, because the MOUNT protocol is not used, the public option allows mounting to occur through a firewall.
rw|ro
The -rw and -ro options indicate whether a file system is to be mounted read-write or read-only. The default is read-write, which is the appropriate option for remote home directories, mail-spooling directories, or other file systems that need to be changed by users. The read-only option is appropriate for directories that should not be changed by users. For example, shared copies of the man pages should not be writable by users.
sec=mode
You can use this option to specify the authentication mechanism to be used during the mount transaction. The value for mode can be one of the following.
  • Use krb5 for Kerberos version 5 authentication service.
  • Use krb5i for Kerberos version 5 with integrity.
  • Use krb5p for Kerberos version 5 with privacy.
  • Use none for no authentication.
  • Use dh for Diffie-Hellman (DH) authentication.
  • Use sys for standard UNIX authentication.
The modes are also defined in /etc/nfssec.conf.
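For example, a mount that requests Kerberos version 5 authentication might look like the following (a sketch; the server name and paths are illustrative):

# mount -F nfs -o sec=krb5 bee:/export/secure /mnt/secure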
soft|hard
An NFS file system that is mounted with the soft option returns an error if the server does not respond. The hard option causes the mount to continue to retry until the server responds. The default is hard, which should be used for most file systems. Applications frequently do not check return values from soft-mounted file systems, which can make the application fail or can lead to corrupted files. If the application does check the return values, routing problems and other conditions can still confuse the application or lead to file corruption if the soft option is used. In most situations, the soft option should not be used. If a file system is mounted by using the hard option and becomes unavailable, an application that uses this file system hangs until the file system becomes available.

Using the mount Command

Refer to the following examples.
  • In NFS version 2 or version 3, both of these commands mount an NFS file system from the server bee read-only.
    # mount -F nfs -r bee:/export/share/man /usr/man
    # mount -F nfs -o ro bee:/export/share/man /usr/man
    In NFS version 4, the following command line would accomplish the same mount.
    # mount -F nfs -o vers=4 -r bee:/export/share/man /usr/man
  • In NFS version 2 or version 3, this command uses the -O option to force the man pages from the server bee to be mounted on the local system even if /usr/man has already been mounted. See the following.
    # mount -F nfs -O bee:/export/share/man /usr/man
    In NFS version 4, the following command line would accomplish the same mount.
    # mount -F nfs -o vers=4 -O bee:/export/share/man /usr/man
  • In NFS version 2 or version 3, this command uses client failover.
    # mount -F nfs -r bee,wasp:/export/share/man /usr/man
    In NFS version 4, the following command line uses client failover.
    # mount -F nfs -o vers=4 -r bee,wasp:/export/share/man /usr/man

    Note - When used from the command line, the listed servers must support the same version of the NFS protocol. Do not use both version 2 and version 3 servers when running mount from the command line. You can use both servers with autofs. Autofs automatically selects the best subset of version 2 or version 3 servers.

  • Here is an example of using an NFS URL with the mount command in NFS version 2 or version 3.
    # mount -F nfs nfs://bee//export/share/man /usr/man
    Here is an example of using an NFS URL with the mount command in NFS version 4.
    # mount -F nfs -o vers=4 nfs://bee//export/share/man /usr/man
  • Use the forcedirectio mount option to enable the client to permit concurrent writes, as well as concurrent reads and writes, to a file. Here is an example.
    # mount -F nfs -o forcedirectio bee:/home/somebody /mnt
    In this example, the command mounts an NFS file system from the server bee and enables concurrent reads and writes for each file in the directory /mnt. When support for concurrent reads and writes is enabled, the following occurs.
    • The client permits applications to write to a file in parallel.
    • Caching is disabled on the client. Consequently, data from reads and writes is kept on the server. More explicitly, because the client does not cache the data that is read or written, any data that the application does not already have cached for itself is read from the server. The client's operating system does not have a copy of this data. Normally, the NFS client caches data in the kernel for applications to use.
      Because caching is disabled on the client, the read-ahead and write-behind processes are disabled. A read-ahead process occurs when the kernel anticipates the data that an application might request next. The kernel then starts the process of gathering that data in advance. The kernel's goal is to have the data ready before the application makes a request for the data.
      The client uses the write-behind process to increase write throughput. Instead of immediately starting an I/O operation every time an application writes data to a file, the data is cached in memory. Later, the data is written to the disk.
      Potentially, the write-behind process permits the data to be written in larger chunks or to be written asynchronously from the application. Typically, the result of using larger chunks is increased throughput. Asynchronous writes permit overlap between application processing and I/O processing. Also, asynchronous writes permit the storage subsystem to optimize the I/O by providing a better sequencing of the I/O. Synchronous writes force a sequence of I/O on the storage subsystem that might not be optimal.
    • Significant performance degradation can occur if the application is not prepared to handle the semantics of data that is not being cached. Multithreaded applications avoid this problem.

    Note - If support for concurrent writes is not enabled, all write requests are serialized. When requests are serialized, the following occurs. When a write request is in progress, a second write request has to wait for the first write request to be completed before proceeding.

  • Use the mount command with no arguments to display file systems that are mounted on a client. See the following.
    % mount
    / on /dev/dsk/c0t3d0s0 read/write/setuid on Wed Apr 7 13:20:47 2004
    /usr on /dev/dsk/c0t3d0s6 read/write/setuid on Wed Apr 7 13:20:47 2004
    /proc on /proc read/write/setuid on Wed Apr 7 13:20:47 2004
    /dev/fd on fd read/write/setuid on Wed Apr 7 13:20:47 2004
    /tmp on swap read/write on Wed Apr 7 13:20:51 2004
    /opt on /dev/dsk/c0t3d0s5 setuid/read/write on Wed Apr 7 13:20:51 2004
    /home/kathys on bee:/export/home/bee7/kathys              
      intr/noquota/nosuid/remote on Wed Apr 24 13:22:13 2004

umount Command

This command enables you to remove a remote file system that is currently mounted. The umount command supports the -V option to allow for testing. You might also use the -a option to unmount several file systems at one time. If mount-points are included with the -a option, those file systems are unmounted. If no mount points are included, an attempt is made to unmount all file systems that are listed in /etc/mnttab except for the “required” file systems, such as /, /usr, /var, /proc, /dev/fd, and /tmp. Because the file system is already mounted and should have an entry in /etc/mnttab, you do not need to include a flag for the file-system type.
The -f option forces a busy file system to be unmounted. You can use this option to unhang a client that is hung while trying to mount an unmountable file system.
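For example, to force the unmount of an unresponsive NFS mount point (a sketch using the mount point from the examples that follow):

# umount -f /usr/man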

Caution - By forcing an unmount of a file system, you can cause data loss if files are being written to.

See the following examples.
Example 6-1 Unmounting a File System
This example unmounts a file system that is mounted on /usr/man:
# umount /usr/man
Example 6-2 Using Options with umount
This example displays the results of running umount -a -V:
# umount -a -V
umount /home/kathys
umount /opt
umount /home
umount /net
Notice that this command does not actually unmount the file systems.

mountall Command

Use this command to mount all file systems or a specific group of file systems that are listed in a file-system table. The command provides a way of doing the following:
  • Selecting the file-system type to be accessed with the -F FSType option
  • Selecting all the remote file systems that are listed in a file-system table with the -r option
  • Selecting all the local file systems with the -l option
Because all file systems that are labeled as NFS file-system type are remote file systems, some of these options are redundant. For more information, see the mountall(1M) man page.
Note that the following two examples of user input are equivalent:
# mountall -F nfs
# mountall -F nfs -r

umountall Command

Use this command to unmount a group of file systems. The -k option runs the fuser -k mount-point command to kill any processes that are associated with the mount-point. The -s option indicates that unmount is not to be performed in parallel. -l specifies that only local file systems are to be used, and -r specifies that only remote file systems are to be used. The -h host option indicates that all file systems from the named host should be unmounted. You cannot combine the -h option with -l or -r.
The following is an example of unmounting all file systems that are mounted from remote hosts:
# umountall -r
The following is an example of unmounting all file systems that are currently mounted from the server bee:
# umountall -h bee

sharectl Command

This release includes the sharectl utility, which is an administrative tool that enables you to configure and manage file-sharing protocols, such as NFS. You can use this command to do the following:
  • Set client and server operational properties
  • Display property values for a specific protocol
  • Obtain the status of a protocol
The sharectl utility uses the following syntax:
# sharectl subcommand [option] [protocol]
The sharectl utility supports the following subcommands:
Table 6-2 Subcommands for sharectl Utility

Subcommand   Description
set          Defines the properties for a file-sharing protocol. For a list of properties and property values, see the parameters described in the nfs(4) man page.
get          Displays the properties and property values for the specified protocol.
status       Displays whether the specified protocol is enabled or disabled. If no protocol is specified, the status of all file-sharing protocols is displayed.
For more information about the sharectl utility, see the following subsections and the sharectl(1M) man page.

set Subcommand

The set subcommand, which defines the properties for a file-sharing protocol, supports the following options:
-h
Provides an online-help description
-p
Defines a property for the protocol
The set subcommand uses the following syntax:
# sharectl set [-h] [-p property=value] protocol

Note - The following:
  • You must have root privileges to use the set subcommand.
  • You do not need to repeat this command-line syntax for each additional property value. You can use the -p option multiple times to define multiple properties on the same command line.

The following example sets the minimum version of the NFS protocol for the client to 3:
# sharectl set -p client_versmin=3 nfs
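To define several properties in one invocation, repeat the -p option (a sketch; the property names are taken from the get example below):

# sharectl set -p client_versmin=3 -p client_versmax=4 nfs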

get Subcommand

The get subcommand, which displays the properties and property values for the specified protocol, supports the following options:
-h
Provides an online-help description.
-p
Identifies the property value for the specified property. If the -p option is not used, all property values are displayed.
The get subcommand uses the following syntax:
# sharectl get [-h] [-p property] protocol

Note - You must have root privileges to use the get subcommand.

The following example uses servers, which is the property that enables you to specify the maximum number of concurrent NFS requests:
# sharectl get -p servers nfs
servers=1024
In the following example, because the -p option is not used, all property values are displayed:
# sharectl get nfs
servers=1024
listen_backlog=32
protocol=ALL
lockd_listen_backlog=32
lockd_servers=20
lockd_retransmit_timeout=5
grace_period=90
nfsmapid_domain=company.com
server_versmin=2
server_versmax=4
client_versmin=2
client_versmax=4
server_delegation=on
max_connections=-1
device=

status Subcommand

The status subcommand, which displays whether the specified protocol is enabled or disabled, supports the following option:
-h
Provides an online-help description
The status subcommand uses the following syntax:
# sharectl status [-h] [protocol]
The following example shows the status of the NFS protocol:
# sharectl status nfs
nfs       enabled

share Command

With this command, you can make a local file system on an NFS server available for mounting. You can also use the share command to display a list of the file systems on your system that are currently shared. The NFS server must be running for the share command to work.
The objects that can be shared include any directory tree. However, each file system hierarchy is limited by the disk slice or partition that the file system is located on.
A file system cannot be shared if that file system is part of a larger file system that is already being shared. For example, if /usr and /usr/local are on one disk slice, /usr can be shared or /usr/local can be shared. However, if both directories need to be shared with different share options, /usr/local must be moved to a separate disk slice.
You can gain access to a file system that is read-only shared through the file handle of a file system that is read-write shared. However, the two file systems have to be on the same disk slice. You can create a more secure situation. Place those file systems that need to be read-write on a separate partition or separate disk slice from the file systems that you need to share as read-only.

Note - For information about how NFS version 4 functions when a file system is unshared and then reshared, refer to Unsharing and Resharing a File System in NFS Version 4.

Non-File-System-Specific share Options

Some of the options that you can include with the -o flag are as follows.
rw|ro
The pathname file system is shared read-write or read-only for all clients.
rw=accesslist
The file system is shared read-write only for the clients that are listed. All other requests are denied. Starting with the Solaris 2.6 release, the list of clients that are defined in accesslist has been expanded. See Setting Access Lists With the share Command for more information. You can use this option to override an -ro option.

NFS-Specific share Options

The options that you can use with NFS file systems include the following.
aclok
This option enables an NFS server that supports the NFS version 2 protocol to be configured to do access control for NFS version 2 clients. Without this option, all clients are given minimal access. With this option, the clients have maximal access. For instance, on file systems that are shared with the -aclok option, if anyone has read permissions, everyone does. However, without this option, you can deny access to a client who should have access permissions. A decision to permit too much access or too little access depends on the security systems already in place. See Using Access Control Lists to Protect UFS Files in Oracle Solaris Administration: Security Services for more information about access control lists (ACLs).

Note - To use ACLs, ensure that clients and servers run software that supports the NFS version 3 and NFS_ACL protocols. If the software only supports the NFS version 3 protocol, clients obtain correct access but cannot manipulate the ACLs. If the software supports the NFS_ACL protocol, the clients obtain correct access and can manipulate the ACLs.

anon=uid
You use uid to select the user ID of unauthenticated users. If you set uid to -1, the server denies access to unauthenticated users. You can grant root access by setting anon=0, but this option allows unauthenticated users to have root access, so use the root option instead.
index=filename
When a user accesses an NFS URL, the -index=filename option forces the HTML file to load, instead of displaying a list of the directory. This option mimics the action of current browsers if an index.html file is found in the directory that the HTTP URL is accessing. This option is the equivalent of setting the DirectoryIndex option for httpd. For instance, suppose that the dfstab file entry resembles the following:
share -F nfs -o ro,public,index=index.html /export/web
These URLs then display the same information:
nfs://<server>/<dir>
nfs://<server>/<dir>/index.html
nfs://<server>//export/web/<dir>
nfs://<server>//export/web/<dir>/index.html
http://<server>/<dir>
http://<server>/<dir>/index.html
log=tag
This option specifies the tag in /etc/nfs/nfslog.conf that contains the NFS server logging configuration information for a file system. This option must be selected to enable NFS server logging.
nosuid
This option signals that all attempts to enable the setuid or setgid mode should be ignored. NFS clients cannot create files with the setuid or setgid bits on.
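For example, a share that disallows setuid and setgid execution might look like this (a sketch; the path is hypothetical):

# share -F nfs -o ro,nosuid /export/pub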
public
The -public option has been added to the share command to enable WebNFS browsing. Only one file system on a server can be shared with this option.
root=accesslist
The server gives root access to the hosts in the list. By default, the server does not give root access to any remote hosts. If the selected security mode is anything other than -sec=sys, you can only include client host names in the accesslist. Starting with the Solaris 2.6 release, the list of clients that are defined in accesslist is expanded. See Setting Access Lists With the share Command for more information.

Caution - Granting root access to other hosts has wide security implications. Use the -root= option with extreme caution.

root=client-name
The client-name value is used with AUTH_SYS authentication to check the client's IP address against a list of addresses provided by exportfs(1B). If a match is found, root access is given to the file systems being shared.
root=host-name
For secure NFS modes, such as AUTH_SYS or RPCSEC_GSS, the server checks the clients' principal names against a list of host-based principal names that are derived from an access list. The generic syntax for the client's principal name is root@hostname. For Kerberos V the syntax is root/hostname.fully.qualified@REALM. When you use the host-name value, the clients on the access list must have the credentials for a principal name. For Kerberos V, the client must have a valid keytab entry for its root/hostname.fully.qualified@REALM principal name. For more information, see Configuring Kerberos Clients in Oracle Solaris Administration: Security Services.
sec=mode[:mode]
mode selects the security modes that are needed to obtain access to the file system. By default, the security mode is UNIX authentication. You can specify multiple modes, but use each security mode only once per command line. Each -mode option applies to any subsequent -rw, -ro, -rw=, -ro=, -root=, and -window= options until another -mode is encountered. The use of -sec=none maps all users to user nobody.
window=value
value selects the maximum lifetime in seconds of a credential on the NFS server. The default value is 30000 seconds or 8.3 hours.

Setting Access Lists With the share Command

The accesslist can include a domain name, a subnet number, or an entry to deny access, as well as the standard -ro=, -rw=, or -root= options. These extensions should simplify file access control on a single server without having to change the namespace or maintain long lists of clients.
This command provides read-only access for most systems but allows read-write access for rose and lilac:
# share -F nfs -o ro,rw=rose:lilac /usr/src
In the next example, read-only access is assigned to any host in the eng netgroup. The client rose is specifically given read-write access.
# share -F nfs -o ro=eng,rw=rose /usr/src

Note - You cannot specify both rw and ro without arguments. If no read-write option is specified, the default is read-write for all clients.

To share one file system with multiple clients, you must type all options on the same line. Multiple invocations of the share command on the same object “remember” only the last command that is run. This command enables read-write access to three client systems, but only rose and tulip are given access to the file system as root.
# share -F nfs -o rw=rose:lilac:tulip,root=rose:tulip /usr/src
When sharing a file system that uses multiple authentication mechanisms, ensure that you include the -ro, -ro=, -rw, -rw=, -root, and -window options after the correct security modes. In this example, UNIX authentication is selected for all hosts in the netgroup that is named eng. These hosts can only mount the file system in read-only mode. The hosts tulip and lilac can mount the file system read-write if these hosts use Diffie-Hellman authentication. With these options, tulip and lilac can mount the file system read-only even if these hosts are not using DH authentication. However, the host names must be listed in the eng netgroup.
# share -F nfs -o sec=dh,rw=tulip:lilac,sec=sys,ro=eng /usr/src
Even though UNIX authentication is the default security mode, UNIX authentication is not included if the -sec option is used. Therefore, you must include a -sec=sys option if UNIX authentication is to be used with any other authentication mechanism.
You can use a DNS domain name in the access list by preceding the actual domain name with a dot. The string that follows the dot is a domain name, not a fully qualified host name. The following entry allows mount access to all hosts in the eng.example.com domain:
# share -F nfs -o ro=.:.eng.example.com /export/share/man
In this example, the single “.” matches all hosts that are matched through the NIS namespace. The results that are returned from these name services do not include the domain name. The “.eng.example.com” entry matches all hosts that use DNS for namespace resolution. DNS always returns a fully qualified host name. So, the longer entry is required if you use a combination of DNS and the other namespaces.
You can use a subnet number in an access list by preceding the actual network number or the network name with “@”. This character differentiates the network name from a netgroup or a fully qualified host name. You must identify the subnet in either /etc/networks or in an NIS namespace. The following entries have the same effect if the 192.168 subnet has been identified as the eng network:
# share -F nfs -o ro=@eng /export/share/man
# share -F nfs -o ro=@192.168 /export/share/man
# share -F nfs -o ro=@192.168.0.0 /export/share/man
The last two entries show that you do not need to include the full network address.
If the network prefix is not byte aligned, as with Classless Inter-Domain Routing (CIDR), the mask length can be explicitly specified on the command line. The mask length is defined by following either the network name or the network number with a slash and the number of significant bits in the prefix of the address. For example:
# share -f nfs -o ro=@eng/17 /export/share/man
# share -F nfs -o ro=@192.168.0/17 /export/share/man
In these examples, the “/17” indicates that the first 17 bits in the address are to be used as the mask. For additional information about CIDR, look up RFC 1519.
You can also select negative access by placing a “-” before the entry. Note that the entries are read from left to right. Therefore, you must place the negative access entries before the entry that the negative access entries apply to:
# share -F nfs -o ro=-rose:.eng.example.com /export/share/man
This example would allow access to any hosts in the eng.example.com domain except the host that is named rose.

unshare Command

This command allows you to make a previously available file system unavailable for mounting by clients. When you unshare an NFS file system, access from clients with existing mounts is inhibited. The file system might still be mounted on the client, but the files are not accessible. The unshare command deletes the share permanently unless the -t option is used to temporarily unshare the file system.

Note - For information about how NFS version 4 functions when a file system is unshared and then reshared, refer to Unsharing and Resharing a File System in NFS Version 4.

The following is an example of unsharing a specific file system:
# unshare /usr/src

shareall Command

This command allows for multiple file systems to be shared. When used with no options, the command shares all entries in /etc/dfs/dfstab. You can include a file name to specify the name of a file that lists share command lines. If you do not include a file name, /etc/dfs/dfstab is checked. If you use a “-” to replace the file name, you can type share commands from standard input.
The following is an example of sharing all file systems that are listed in a local file:
# shareall /etc/dfs/special_dfstab

unshareall Command

This command makes all currently shared resources unavailable. The -F FSType option selects a list of file-system types that are defined in /etc/dfs/fstypes. This flag enables you to choose only certain types of file systems to be unshared. The default file-system type is defined in /etc/dfs/fstypes. To choose specific file systems, use the unshare command.
The following is an example of unsharing all NFS-type file systems:
# unshareall -F nfs

showmount Command

This command displays one of the following:
  • All clients that have remotely mounted file systems that are shared from an NFS server
  • Only the file systems that are mounted by clients
  • The shared file systems with the client access information

Note - The showmount command only shows NFS version 2 and version 3 exports. This command does not show NFS version 4 exports.

The command syntax is as follows:
showmount [ -ade ] [ hostname ]
-a
Prints a list of all the remote mounts. Each entry includes the client name and the directory.
-d
Prints a list of the directories that are remotely mounted by clients.
-e
Prints a list of the files that are shared or are exported.
hostname
Selects the NFS server to gather the information from.
If hostname is not specified, the local host is queried.
The following command lists all clients and the local directories that the clients have mounted:
# showmount -a bee
lilac:/export/share/man
lilac:/usr/src
rose:/usr/src
tulip:/export/share/man
The following command lists the directories that have been mounted:
# showmount -d bee
/export/share/man
/usr/src
The following command lists file systems that have been shared:
# showmount -e bee
/usr/src                                (everyone)
/export/share/man                    eng

setmnt Command

This command creates an /etc/mnttab table. The mount and umount commands consult the table. Generally, you do not have to run this command manually, as this command runs automatically when a system is booted.

nfsref Command

The nfsref command is used to add, delete or list NFSv4 referrals. The command syntax is as follows:
nfsref add path location [ location … ]
nfsref remove path
nfsref lookup path
path
Selects the name for the reparse point.
location
Identifies one or more NFS or SMB shared file systems to be associated with the reparse point.
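A sketch of adding and inspecting a referral (the reparse-point path, server names, and the server:/path location format here are illustrative assumptions):

# nfsref add /export/docs svr1:/export/docs svr2:/export/docs
# nfsref lookup /export/docs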

Wednesday 5 February 2020

Fdisk & LVM partition

 Fdisk & LVM partition for Linux

Command sequence for carving an XFS file system out of a multipath LUN:

multipath -ll                              # identify the multipath device (mpathe)
fdisk /dev/mapper/mpathe                   # create an LVM partition on the device
partprobe                                  # make the kernel re-read the partition table
pvcreate /dev/mapper/mpathe1               # initialize the new partition as a PV
pvs                                        # confirm the new physical volume
vgcreate -s 16M u01 /dev/mapper/mpathe1    # create VG "u01" with a 16M extent size
vgdisplay                                  # verify the new volume group
lvcreate --name u01 --size 499.98G u01     # create LV "u01" in VG "u01"
lvs                                        # verify the logical volume
mkfs.xfs /dev/mapper/u01-u01               # create the XFS file system
mkdir -p /u01                              # create the mount point
mount /dev/mapper/u01-u01 /u01             # mount the file system
df -kh                                     # confirm the mount
pvs; vgs                                   # final sanity checks

++++++++++++++++++++++++++


Create and Extend XFS filesystem based on LVM

XFS is a file system designed for high performance, scalability, and capacity. It is generally used where a large amount of data is to be stored on the file system. Useful features include xfs_freeze and xfs_unfreeze, which quiesce the file system for consistent snapshots. One limitation of XFS is that the file system cannot be shrunk or reduced.
XFS is the default file system on CentOS 7 and RHEL 7. In this post, we will discuss how to create and extend an XFS file system based on LVM in CentOS 7. I am assuming that a new disk has been assigned to the Linux box, and I am going to perform the steps below on CentOS 7.

Step:1 Create a partition using fdisk

In the example below, I have created a 10 GB partition on /dev/sdb and set the partition type to “8e” (Linux LVM), as sketched below.
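A sketch of the fdisk dialog, assuming /dev/sdb is the new disk (prompts abbreviated; the exact dialog varies with the fdisk version):

# fdisk /dev/sdb
Command (m for help): n        <- create a new partition
Select (default p): p          <- primary partition
Partition number (1-4): 1
First sector: <Enter>          <- accept the default
Last sector: +10G              <- make the partition 10 GB
Command (m for help): t        <- change the partition type
Hex code: 8e                   <- Linux LVM
Command (m for help): w        <- write the table and exit
# partprobe /dev/sdb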


Step:2 Create LVM components: pvcreate, vgcreate, and lvcreate

[root@linuxtechi ~]# pvcreate /dev/sdb1
 Physical volume "/dev/sdb1" successfully created
[root@linuxtechi ~]#

[root@linuxtechi ~]# vgcreate vg_xfs /dev/sdb1
 Volume group "vg_xfs" successfully created
[root@linuxtechi ~]#

[root@linuxtechi ~]# lvcreate -L +6G -n xfs_db vg_xfs
 Logical volume "xfs_db" created
[root@linuxtechi ~]#

Step:3 Create the XFS file system on the LVM volume “/dev/vg_xfs/xfs_db”

[root@linuxtechi ~]# mkfs.xfs /dev/vg_xfs/xfs_db
[screenshot: mkfs.xfs output]

Step:4 Mount the xfs file system

Create a directory named xfs_test under /root and mount the file system on it using the mount command, as sketched below.
[screenshot: mounting the XFS file system]
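A sketch of the commands behind the screenshot, assuming the mount point /root/xfs_test:

# mkdir /root/xfs_test
# mount /dev/vg_xfs/xfs_db /root/xfs_test
# df -hT /root/xfs_test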
For permanent mounting, use the /etc/fstab file.
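A sketch of the corresponding fstab entry, again assuming the /root/xfs_test mount point:

/dev/vg_xfs/xfs_db    /root/xfs_test    xfs    defaults    0 0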

Step:5 Extend the size of xfs file system

Check whether free space is available in the volume group (vg_xfs) using the command below:
[root@linuxtechi ~]# vgs vg_xfs 
  VG     #PV #LV #SN Attr   VSize  VFree
  vg_xfs   1   1   0 wz--n- 10.00g 4.00g
[root@linuxtechi ~]#
Since 4 GB is free, we will extend the file system by 3 GB using the lvextend command with the “-r” option, which resizes the file system together with the logical volume:
[root@linuxtechi ~]# lvextend -L +3G /dev/vg_xfs/xfs_db -r
[screenshot: lvextend output]
As we can see above, the size of “/dev/vg_xfs/xfs_db” has been extended from 6 GB to 9 GB.
Note: If the XFS file system is not based on LVM, then use the xfs_growfs command as shown below:
[root@linuxtechi ~]# xfs_growfs <Mount_Point> -D <Size>
The “-D size” option extends the file system to the specified size (expressed in file system blocks). Without the -D option, xfs_growfs extends the file system to the maximum size supported by the device.
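For example, to grow a non-LVM XFS file system to the maximum size of the underlying device (a sketch; the mount point is the one assumed above):

# xfs_growfs /root/xfs_test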



1) Moving Extents to Existing Physical Volumes

Use the pvs command to check whether the physical volume we plan to remove (the “/dev/sdb1” disk) is in use.

# pvs -o+pv_used

PV         VG   Fmt  Attr PSize  PFree  Used
/dev/sda1  myvg lvm2 a-   75.00G 14.00G 61.00G
/dev/sdb1  myvg lvm2 a-   50.00G 45.00G  5.00G
/dev/sdc1  myvg lvm2 a-   17.15G 12.15G  5.00G

If it is in use, check whether there are enough free extents on the other physical volumes in the volume group.

If so, you can run the pvmove command on the device you want to remove. Extents will be distributed to other devices.

# pvmove /dev/sdb1

/dev/sdb1: Moved: 2.0%
…
/dev/sdb1: Moved: 79.2%
…
/dev/sdb1: Moved: 100.0%

When the pvmove command is complete, re-run the pvs command to check whether the physical volume is free.

# pvs -o+pv_used

PV         VG   Fmt  Attr PSize  PFree  Used
/dev/sda1  myvg lvm2 a-   75.00G  9.00G 66.00G
/dev/sdb1  myvg lvm2 a-   50.00G 50.00G      0
/dev/sdc1  myvg lvm2 a-   17.15G 12.15G  5.00G

If it’s free, use the vgreduce command to remove the physical volume /dev/sdb1 from the volume group.

# vgreduce myvg /dev/sdb1
Removed "/dev/sdb1" from volume group "myvg"

Finally, run the pvremove command to remove the disk from the LVM configuration. Now, the disk is completely removed from the LVM and can be used for other purposes.

# pvremove /dev/sdb1
Labels on physical volume "/dev/sdb1" successfully wiped.

2) Moving Extents to a New Disk

If you don’t have enough free extents on the other physical volumes in the volume group, add a new physical volume using the steps below.

Request new LUNs from the storage team. Once these are allocated, run the following commands to discover the newly added LUNs or disks in Linux.

# ls /sys/class/scsi_host
host0
# echo "- - -" > /sys/class/scsi_host/host0/scan
# fdisk -l

Once the disk is detected in the OS, use the pvcreate command to create the physical volume.

# pvcreate /dev/sdd1
Physical volume "/dev/sdd1" successfully created

Use the following command to add the new physical volume /dev/sdd1 to the existing volume group myvg.

# vgextend myvg /dev/sdd1
Volume group "myvg" successfully extended

Now, use the pvs command to see the new disk “/dev/sdd1” that you have added.

# pvs -o+pv_used

PV         VG   Fmt  Attr PSize  PFree  Used
/dev/sda1  myvg lvm2 a-   75.00G 14.00G 61.00G
/dev/sdb1  myvg lvm2 a-   50.00G      0 50.00G
/dev/sdc1  myvg lvm2 a-   17.15G 12.15G  5.00G
/dev/sdd1  myvg lvm2 a-   60.00G 60.00G      0

Use the pvmove command to move the data from /dev/sdb1 to /dev/sdd1.

# pvmove /dev/sdb1 /dev/sdd1

/dev/sdb1: Moved: 10.0%
…
/dev/sdb1: Moved: 79.7%
…
/dev/sdb1: Moved: 100.0%

After the data is moved to the new disk, re-run the pvs command to check whether the physical volume is free.

# pvs -o+pv_used

PV         VG   Fmt  Attr PSize  PFree  Used
/dev/sda1  myvg lvm2 a-   75.00G 14.00G 61.00G
/dev/sdb1  myvg lvm2 a-   50.00G 50.00G      0
/dev/sdc1  myvg lvm2 a-   17.15G 12.15G  5.00G
/dev/sdd1  myvg lvm2 a-   60.00G 10.00G 50.00G

If it’s free, use the vgreduce command to remove the physical volume /dev/sdb1 from the volume group.

# vgreduce myvg /dev/sdb1
Removed "/dev/sdb1" from volume group "myvg"

Finally, run the pvremove command to remove the disk from the LVM configuration. Now, the disk is completely removed from the LVM and can be used for other purposes.

# pvremove /dev/sdb1
Labels on physical volume "/dev/sdb1" successfully wiped.
 

Moving Extents to Existing Physical Volumes

In this example, the logical volume is distributed across four physical volumes in the volume group myvg.
# pvs -o+pv_used
  PV         VG   Fmt  Attr PSize  PFree  Used
  /dev/sda1  myvg lvm2 a-   17.15G 12.15G  5.00G
  /dev/sdb1  myvg lvm2 a-   17.15G 12.15G  5.00G
  /dev/sdc1  myvg lvm2 a-   17.15G 12.15G  5.00G
  /dev/sdd1  myvg lvm2 a-   17.15G  2.15G 15.00G
This example moves the extents off /dev/sdb1 so that it can be removed from the volume group.
  1. If there are enough free extents on the other physical volumes in the volume group, you can execute the pvmove command on the device you want to remove with no other options and the extents will be distributed to the other devices.
    # pvmove /dev/sdb1
      /dev/sdb1: Moved: 2.0%
     ...
      /dev/sdb1: Moved: 79.2%
     ...
      /dev/sdb1: Moved: 100.0%
    
    After the pvmove command has finished executing, the distribution of extents is as follows:
    # pvs -o+pv_used
      PV         VG   Fmt  Attr PSize  PFree  Used
      /dev/sda1  myvg lvm2 a-   17.15G  7.15G 10.00G
      /dev/sdb1  myvg lvm2 a-   17.15G 17.15G     0
      /dev/sdc1  myvg lvm2 a-   17.15G 12.15G  5.00G
      /dev/sdd1  myvg lvm2 a-   17.15G  2.15G 15.00G
    
  2. Use the vgreduce command to remove the physical volume /dev/sdb1 from the volume group.
    # vgreduce myvg /dev/sdb1
      Removed "/dev/sdb1" from volume group "myvg"
    # pvs
      PV         VG   Fmt  Attr PSize  PFree
      /dev/sda1  myvg lvm2 a-   17.15G  7.15G
      /dev/sdb1       lvm2 --   17.15G 17.15G
      /dev/sdc1  myvg lvm2 a-   17.15G 12.15G
      /dev/sdd1  myvg lvm2 a-   17.15G  2.15G
    
The disk can now be physically removed or allocated to other users.

Moving Extents to a New Disk

In this example, the logical volume is distributed across three physical volumes in the volume group myvg as follows:
# pvs -o+pv_used
  PV         VG   Fmt  Attr PSize  PFree  Used
  /dev/sda1  myvg lvm2 a-   17.15G  7.15G 10.00G
  /dev/sdb1  myvg lvm2 a-   17.15G 15.15G  2.00G
  /dev/sdc1  myvg lvm2 a-   17.15G 15.15G  2.00G
This example procedure moves the extents of /dev/sdb1 to a new device, /dev/sdd1.
  1. Create a new physical volume from /dev/sdd1.
    # pvcreate /dev/sdd1
      Physical volume "/dev/sdd1" successfully created
    
  2. Add the new physical volume /dev/sdd1 to the existing volume group myvg.
    # vgextend myvg /dev/sdd1
      Volume group "myvg" successfully extended
    # pvs -o+pv_used
      PV         VG   Fmt  Attr PSize  PFree  Used
      /dev/sda1   myvg lvm2 a-   17.15G  7.15G 10.00G
      /dev/sdb1   myvg lvm2 a-   17.15G 15.15G  2.00G
      /dev/sdc1   myvg lvm2 a-   17.15G 15.15G  2.00G
      /dev/sdd1   myvg lvm2 a-   17.15G 17.15G     0
    
  3. Use the pvmove command to move the data from /dev/sdb1 to /dev/sdd1.
    # pvmove /dev/sdb1 /dev/sdd1
      /dev/sdb1: Moved: 10.0%
    ...
      /dev/sdb1: Moved: 79.7%
    ...
      /dev/sdb1: Moved: 100.0%
    
    # pvs -o+pv_used
      PV          VG   Fmt  Attr PSize  PFree  Used
      /dev/sda1   myvg lvm2 a-   17.15G  7.15G 10.00G
      /dev/sdb1   myvg lvm2 a-   17.15G 17.15G     0
      /dev/sdc1   myvg lvm2 a-   17.15G 15.15G  2.00G
      /dev/sdd1   myvg lvm2 a-   17.15G 15.15G  2.00G
    
  4. After you have moved the data off /dev/sdb1, you can remove it from the volume group.
    # vgreduce myvg /dev/sdb1
      Removed "/dev/sdb1" from volume group "myvg" 
     
     

This article also serves as a solution for the following questions:

  1. How to safely remove a disk from LVM
  2. How to remove a disk from a VG online
  3. How to copy data from one disk to another at the physical level
  4. How to replace a faulty disk in LVM online
  5. How to move physical extents from one disk to another
  6. How to free up a disk from a VG to shrink the VG size
  7. How to safely reduce a VG

We have a volume group named vg01 with a 20M logical volume created in it, mounted on the /mydata mount point. Check the lsblk output below:

root@kerneltalks # lsblk
NAME         MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda         202:0    0  10G  0 disk
├─xvda1      202:1    0   1M  0 part
└─xvda2      202:2    0  10G  0 part /
xvdf         202:80   0   1G  0 disk
└─vg01-lvol1 253:0    0  20M  0 lvm  /mydata

Now, attach a new disk of the same or bigger size than /dev/xvdf. Identify the new disk on the system by running lsblk again and comparing the output with the previous one.

root@kerneltalks # lsblk
NAME         MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda         202:0    0  10G  0 disk
├─xvda1      202:1    0   1M  0 part
└─xvda2      202:2    0  10G  0 part /
xvdf         202:80   0   1G  0 disk
└─vg01-lvol1 253:0    0  20M  0 lvm  /mydata
xvdg         202:96   0   1G  0 disk

You can see that the new disk has been identified as /dev/xvdg. Now we will add this disk to the current VG vg01 using the vgextend command. Before it can be used in LVM, you need to run pvcreate on it.

root@kerneltalks # pvcreate /dev/xvdg
  Physical volume "/dev/xvdg" successfully created.
root@kerneltalks # vgextend vg01 /dev/xvdg
  Volume group "vg01" successfully extended

Now we have the disk to be removed, /dev/xvdf, and the new disk to be added, /dev/xvdg, in the same volume group vg01. You can verify this using the pvs command:

root@kerneltalks # pvs
  PV         VG   Fmt  Attr PSize    PFree
  /dev/xvdf  vg01 lvm2 a--  1020.00m 1000.00m
  /dev/xvdg  vg01 lvm2 a--  1020.00m 1020.00m

Observe the above output. Since we created the 20M logical volume on disk /dev/xvdf, it shows 20M less free space. The new disk /dev/xvdg is completely free.

Now we need to move the physical extents (PEs) from disk xvdf to xvdg. pvmove is the command used to achieve this. You just need to supply the name of the disk to move the PEs off; the command moves them from that disk to all available disks in the same volume group. In our case, only one other disk is available to receive the PEs.

root@kerneltalks # pvmove /dev/xvdf
  /dev/xvdf: Moved: 0.00%
  /dev/xvdf: Moved: 100.00%

Progress is shown periodically. If the operation is interrupted for any reason, the PEs already moved remain on the destination disks and the unmoved PEs remain on the source disk. The operation can be resumed by issuing the same command again; it will then move the remaining PEs off the source disk.

You can even run it in the background, redirecting the output to log files:

root@kerneltalks # pvmove /dev/xvdf 2>error.log >normal.log &
[1] 1639

The above command runs pvmove in the background, redirecting normal console output to the normal.log file and errors to the error.log file, both in the current working directory.

Now if you check the pvs output again, you will find that all space on disk xvdf is free, which means it is no longer used to store any data in that VG. This ensures you can remove the disk without any issues.

root@kerneltalks # pvs
  PV         VG   Fmt  Attr PSize    PFree
  /dev/xvdf  vg01 lvm2 a--  1020.00m 1020.00m
  /dev/xvdg  vg01 lvm2 a--  1020.00m 1000.00m

Before removing/detaching the disk from the server, you need to remove it from LVM. You can do this by reducing the VG so that the disk is no longer part of it.

root@kerneltalks # vgreduce vg01 /dev/xvdf
  Removed "/dev/xvdf" from volume group "vg01"

Now disk xvdf can be removed/detached from the server safely.

A few useful switches of pvmove:

Verbose mode prints more detailed information about the operation. It can be invoked with the -v switch.

root@kerneltalks # pvmove -v /dev/xvdf
    Cluster mirror log daemon is not running.
    Wiping internal VG cache
    Wiping cache of LVM-capable devices
    Archiving volume group "vg01" metadata (seqno 17).
    Creating logical volume pvmove0
    activation/volume_list configuration setting not defined: Checking only host tags for vg01/lvol1.
    Moving 5 extents of logical volume vg01/lvol1.
    activation/volume_list configuration setting not defined: Checking only host tags for vg01/lvol1.
    Creating vg01-pvmove0
    Loading table for vg01-pvmove0 (253:1).
    Loading table for vg01-lvol1 (253:0).
    Suspending vg01-lvol1 (253:0) with device flush
    Resuming vg01-pvmove0 (253:1).
    Resuming vg01-lvol1 (253:0).
    Creating volume group backup "/etc/lvm/backup/vg01" (seqno 18).
    activation/volume_list configuration setting not defined: Checking only host tags for vg01/pvmove0.
    Checking progress before waiting every 15 seconds.
  /dev/xvdf: Moved: 0.00%
  /dev/xvdf: Moved: 100.00%
    Polling finished successfully.

The interval at which the command reports progress can be changed with the -i switch, followed by a number of seconds:

root@kerneltalks # pvmove -i 1 /dev/xvdf
 
 

In this example, we will be deleting “testlv” from the volume group “datavg”. The LV is mounted on the mount point /data01.

# df -hP | grep -i data01
/dev/mapper/datavg-testlv  976M  2.6M  907M   1% /data01
# lvs
  LV     VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root   centos -wi-ao---- 17.47g
  swap   centos -wi-ao----  2.00g
  testlv datavg -wi-ao----  1.00g
#

root@arch-bill /home/bill # gdisk /dev/sdb
GPT fdisk (gdisk) version 0.8.10

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): t
Partition number (1-6): 2
Current type is 'Microsoft basic data'
Hex code or GUID (L to show codes, Enter = 8300): 
Changed type of partition to 'Linux filesystem'

Command (? for help): t
Partition number (1-6): 3
Current type is 'Microsoft basic data'
Hex code or GUID (L to show codes, Enter = 8300): 
Changed type of partition to 'Linux filesystem'

Command (? for help): t
Partition number (1-6): 4
Current type is 'Microsoft basic data'
Hex code or GUID (L to show codes, Enter = 8300): 
Changed type of partition to 'Linux filesystem'

Command (? for help): t
Partition number (1-6): 6
Current type is 'Microsoft basic data'
Hex code or GUID (L to show codes, Enter = 8300): 
Changed type of partition to 'Linux filesystem'

Command (? for help): p
Disk /dev/sdb: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 0645408C-0374-4357-8663-D2A3512E07BD
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 4204653 sectors (2.0 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            6143   2.0 MiB     EF02  
   2            8192         8396799   4.0 GiB     8300  
   3         8398848        41953279   16.0 GiB    8300  
   4        41955328       167786495   60.0 GiB    8300  
   6       167788544      3902834687   1.7 TiB     8300  

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): Y
OK; writing new GUID partition table (GPT) to /dev/sdb.
The operation has completed successfully.
root@arch-bill /home/bill # fdisk -l

Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0645408C-0374-4357-8663-D2A3512E07BD

Device           Start          End   Size Type
/dev/sdb1         2048         6143     2M BIOS boot partition
/dev/sdb2         8192      8396799     4G Linux filesystem
/dev/sdb3      8398848     41953279    16G Linux filesystem
/dev/sdb4     41955328    167786495    60G Linux filesystem
/dev/sdb6    167788544   3902834687   1.8T Linux filesystem


Disk /dev/sdc: 298.1 GiB, 320072933376 bytes, 625142448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 5607E1F7-1A96-4EF5-A353-29BE91873431

Device           Start          End   Size Type
/dev/sdc1         2048      6293503     3G Linux swap
/dev/sdc2      6295552    618600447   292G Microsoft basic data


Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: C3E095E9-90D9-4BFA-A97F-5D74E64FC4A4

Device           Start          End   Size Type
/dev/sda1         8192     32776191  15.6G Microsoft basic data
/dev/sda2     32778240   1953509375 915.9G Microsoft basic data
/dev/sda3         2048         6143     2M BIOS boot partition

root@arch-bill /home/bill # 

Returning to the testlv example, remove the LV and its volume group as follows:

1. Delete the mount point entry from /etc/fstab:

# cat /etc/fstab
...
/dev/mapper/datavg-testlv            /data01              ext4    defaults        0 0
...

2. Unmount the filesystem:

# umount /data01

3. Deactivate the logical volume:

# lvchange -an /dev/datavg/testlv

4. Delete the logical volume:

# lvremove /dev/datavg/testlv

5. Deactivate the volume group:

# vgchange -an datavg

6. Delete the volume group:

# vgremove datavg

7. Remove the LVM labels from the Physical Volumes that backed the volume group “datavg”:

# pvremove /dev/sdb  /dev/sdc
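
As a quick sanity check afterwards, the LVM reporting commands should no longer list datavg or its devices:

# pvs
# vgs
# lvs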
=============
root volume extended in linux
===============
 

Process summary

The process is straightforward. Attach the new storage to the system. Next, create a new Physical Volume (PV) from that storage. Add the PV to the Volume Group (VG) and then extend the Logical Volume (LV).
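
In command form, the whole procedure looks like this (a sketch using the device, VG, and LV names from this walkthrough; substitute your own):

# pvcreate /dev/xvdc                       # initialize the new disk as a PV
# vgextend centos /dev/xvdc                # add the PV to the existing VG
# lvextend -l +100%FREE /dev/centos/root   # grow the LV into all free space
# xfs_growfs /                             # grow the XFS filesystem to match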

In Figure 1 below, the marked line shows the original size of the root mount point; xvdc is the newly attached disk. The goal is to extend the root volume to 60G.

Figure 1: Use the lsblk command to display volume information.

Create a Physical Volume

Figure 2: Use the pvcreate command to designate a disk as a PV.
[root@redhat-sysadmin ~]# pvcreate /dev/xvdc
  Physical volume "/dev/xvdc" successfully created.

After attaching the new storage /dev/xvdc, you need to run pvcreate so that the disk is initialized and can be seen by the Logical Volume Manager (LVM).

Identify the Volume Group

Next, identify the Volume Group (VG) that the new disk will extend by running the vgs command. Mine is called centos, and it is currently the only VG on this LVM setup.

Figure 3: Use the vgs command to display Volume Group information.
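
The vgs output has this shape (illustrative values; your sizes will differ):

[root@redhat-sysadmin ~]# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  centos   1   2   0 wz--n- <52.00g    0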

Extend the Volume Group

The vgextend command allows you to add one or more initialized Physical Volumes to an existing VG to extend its size.

Here, the centos Volume Group is the one to extend.

Figure 4: The vgextend command adds capacity to the VG.
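
The command itself takes the VG name followed by the new PV (shown here with the names from this example):

[root@redhat-sysadmin ~]# vgextend centos /dev/xvdc
  Volume group "centos" successfully extended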

After extending it, run vgs or vgdisplay for a more detailed overview of the VG.

The vgs command shows only a brief summary of the VG in a few lines.

Figure 5: Use the vgs command to display VG information.

The vgdisplay command shows all the VGs in the LVM and displays complete information about each.

Figure 6: Use the vgdisplay command to display VG information.

As Figure 6 shows, the VG now has 10GB free. You can give all or only part of that free space to the Logical Volume.

Identify the Logical Volume

The lvs or lvdisplay command shows the Logical Volumes associated with a Volume Group. Run lvs: the Logical Volume to extend is root, which belongs to the centos VG. The VG has already been extended, so the next step is to extend the Logical Volume.

Figure 7: Use the lvs command to display LV information.
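
The lvs output looks similar to this (illustrative values, before the extension):

[root@redhat-sysadmin ~]# lvs
  LV   VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root centos -wi-ao---- <50.00g
  swap centos -wi-ao----   2.00g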

Extend the Logical Volume

Extend the LV with the lvextend command, which grows a Logical Volume using free space from its Volume Group. The -l +100%FREE option below allocates all of the VG's remaining free extents to the LV.

Figure 8: Use the lvextend command to extend the LV.
[root@redhat-sysadmin ~]# lvextend -l +100%FREE /dev/centos/root

Extend the filesystem

You need to confirm which filesystem type you're using. Red Hat Enterprise Linux defaults to the XFS filesystem, but you can check with lsblk -f or df -Th.

After the Logical Volume has been extended, resize the filesystem on it so the new space becomes available. Resize an XFS filesystem with the xfs_growfs command.

Figure 9: Use the xfs_growfs command to grow the filesystem on the newly extended LV.
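
xfs_growfs takes the mount point of the filesystem to grow; for the root filesystem that is simply /:

[root@redhat-sysadmin ~]# xfs_growfs /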

Finally, verify the new size of the extended filesystem.

Figure 10: Use the df -h command to display storage information.
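
For example (illustrative numbers, matching the 60G target from this walkthrough):

[root@redhat-sysadmin ~]# df -h /
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   60G  5.1G   55G   9% /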
