Wednesday, 15 December 2021

User permissions in NFS mounted directory

I have Oracle Linux 6.7 and an NFS server on Windows, and I am trying to mount a shared folder in Linux. The Windows NFS server has a shared mount: 192.168.1.10:/OracleBK

On my Oracle Linux server I created a folder, /orabackup, and the oracle user from the oinstall group is the owner of this folder:

mkdir /orabackup
chown -R oracle:oinstall /orabackup
chmod -R 777 /orabackup
mount -t nfs -o rw 192.168.1.10:/OracleBK /orabackup

The corresponding /etc/fstab line is:

192.168.1.10:/OracleBK /orabackup nfs defaults 0 0

The command used for mounting the folder is:

mount /orabackup

Now the /orabackup folder is mounted. However, the oracle user cannot read and write, and it needs read and write permissions on this directory. The root user can read and write. What should be done to give full permissions to the oracle user?

Best Answer

NFS checks access permissions against user IDs (UIDs). The UID of the user on your local machine needs to match the UID of the owner of the files you are trying to access on the server.

I would suggest going to the server and looking at the file permissions. Which UID do they belong to (find out with id username) and which permissions are set?

If you are the only one accessing the files on the server, you can make the server pretend that all requests come from the proper UID. For that, NFS has the option all_squash. It tells the server to map all requests to the anonymous user, specified by anonuid,anongid. Add these options to the export in /etc/exports:

all_squash,anonuid=1026,anongid=100

Be warned, though, that this will make anyone mounting the export effectively the owner of those files.
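
As a quick sketch of the check described in the answer (the UIDs, GIDs and export line below are examples, not values from the original question): first confirm the oracle user's UID/GID on the Linux client, then, on a Linux-style NFS server, all_squash would be added to the export as shown. A Windows NFS server exposes the equivalent anonymous UID/GID mapping through its own share settings rather than /etc/exports.

# On the Linux client, find the oracle user's UID and GID:
# id oracle
uid=54321(oracle) gid=54322(oinstall) groups=54322(oinstall)

# On a Linux NFS server, the suggested export would then look like
# (hypothetical path and IDs -- match anonuid/anongid to the client user):
# cat /etc/exports
/OracleBK  192.168.1.0/24(rw,all_squash,anonuid=54321,anongid=54322)

# Re-export after editing /etc/exports:
# exportfs -ra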

Saturday, 30 October 2021

Backup user create for solaris 11.4

Commands used in this procedure:

pcred $$
tail -1 /etc/passwd
tail -1 /etc/security/auth_attr
ppriv -l basic
ppriv $$
useradd -m -K defaultpriv=basic,file_dac_read backup
grep backup /etc/user_attr
tail -1 /etc/shadow
ppriv $$
passwd backup

Describe User IDs

The simple command id (/usr/bin/id) can be used to display the current user's User ID and Group ID. It is these IDs that are used when accessing resources and help control access. For the current user we can see that the User ID is 100 and the Group ID is 100.

Each running process also maintains the IDs that the process is running as. This will show 3 User IDs and 3 Group IDs:

Real: the actual ID used to start the process.
Effective: shows if it was run with sudo or similar.
Set: shows if the Set UID bit or Set GID bit is set on the program, which controls the accounts used when the process runs. This is set by default on programs such as /usr/bin/passwd.

The shell variable $$ contains the process ID of the currently running process, so if we use the command:

pcred $$

we can display the credentials used for the current process, which will be the bash shell in our case.

When a standard user runs the passwd program it will run as the user root, because the SUID permission is set on this program. We can demonstrate this by running the passwd program and leaving it running. From another terminal we can search for the process and display its credentials:

sudo pcred $(pgrep passwd)

Here we can see the REAL UID is 100 but the EFFECTIVE and SET UID is 0, the root user.

Creating a User

A user with root privileges can create new local users on the system using the command useradd (/usr/sbin/useradd). Not all options need to be provided with the command; the default values can be displayed with:

useradd -D

We can see from the above output that the default user shell will be bash and the user's home directory will be located in /export/home if a location is not specified at the time the user is created. To create a new user we can use the command:

useradd -m bob

The -m option creates the user's home directory immediately rather than on first login. User accounts are stored in the file /etc/passwd. The new user will be the last entry in the file, so we can use the command:

tail -1 /etc/passwd

to display the entry. Output from the command:

id bob

will show the group and user IDs. Using the command:

finger bob

we can display user information including last login times.

Setting the user's password

We have created the user bob; as yet he does not have a password. User passwords are stored in the file /etc/shadow.

tail -1 /etc/shadow

Here we can see the user bob. The password is the 2nd field, shown as UP in the output. This is the password status and can also be seen with the command:

passwd -s bob

UP indicates that the password is as yet unset by the administrator and the account cannot be used. The initial setting of the user's password is known as activating the account. To activate the account, the root user or a user with the privileges to set the password:

solaris.passwd.assign
solaris.account.activate

can simply set the password for the user with:

passwd bob

We will need to enter the password twice to verify our typing expertise. The password status should now show as PS, indicating that the password is set:

passwd -s bob

We now have a functioning account for the user bob.

Assigning roles to users

If the new user bob needs to carry out administrative duties, we will find that he cannot use the substitute user command to gain root permissions, even if he does know the password. If we add the user bob to the root role he will then be able to use su:

usermod -R root bob

We can display the roles associated with a user using the roles command:

roles bob
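
A minimal sketch of verifying the result (the output shown is illustrative, not captured from a real system): check that the backup user's privilege set includes file_dac_read, and that bob now has the root role.

# As the backup user, confirm file_dac_read was added to the default basic set:
% ppriv $$
(the E and P privilege sets should list file_dac_read in addition to basic)

# Confirm the role assignment for bob:
# roles bob
root

# bob can now assume the root role (the role's password will be prompted for):
% su root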

Sunday, 29 August 2021

MTU for LINUX

Show the running MTU
----------------
# ip a | grep mtu

Temporarily changing the MTU size – using the ifconfig command

We can use the ifconfig command to change the MTU size of a system's network interface. However, remember that this change does not survive a reboot: the value returns to the default, i.e. 1500.

Setup MTU
-----------------
ifconfig <interface> mtu <size> up

Oracle Linux: How to Change MTU Size (Doc ID 2520148.1)

Solution

Check the current MTU settings. You can use both the ifconfig and ip commands to check it:

# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:00:17:00:XX:XX
          inet addr:  Bcast:XXX.XXX.XXX.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:2786436 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3744195 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:789337509 (752.7 MiB)  TX bytes:654466831 (624.1 MiB)

[root@j-ol6-8324 opc]# ip link list
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0:  mtu 9000 qdisc mq state UP qlen 1000
    link/ether 02:00:17:00:XX:XX brd ff:ff:ff:ff:ff:ff

Change the MTU size with the ifconfig or ip commands:

# ifconfig $DEV mtu 1400
or
# ip link set $DEV mtu 1400

For instance:

# ifconfig eth0 mtu 1500
Or:
# ip link set dev eth0 mtu 1500

# ip link list
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0:  mtu 1500 qdisc mq state UP qlen 1000
    link/ether 02:00:17:00:XX:XX brd ff:ff:ff:ff:ff:ff

Make the setting permanent by appending an 'MTU=<size>' line in /etc/sysconfig/network-scripts/ifcfg-*:

# grep MTU /etc/sysconfig/network-scripts/ifcfg-*
/etc/sysconfig/network-scripts/ifcfg-eth0:MTU=9000

Restart the network service (for Oracle Linux 6):

# service network restart

Or (OL7):

# systemctl restart network

Test it from a remote machine by pinging to probe the packet path:

$ for x in 1462 1463 1472 1473 1500 9000; do echo Length $x; ping -c 3 -M do -s $x xxx.xxx.xxx.xxx; done
Length 1462
PING xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx) 1462(1490) bytes of data.
1470 bytes from xxx.xxx.xxx.xxx: icmp_seq=1 ttl=39 time=231 ms
1470 bytes from xxx.xxx.xxx.xxx: icmp_seq=2 ttl=39 time=231 ms
1470 bytes from xxx.xxx.xxx.xxx: icmp_seq=3 ttl=39 time=243 ms

--- xxx.xxx.xxx.xxx ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2244ms
rtt min/avg/max/mdev = 231.508/235.597/243.448/5.553 ms
Length 1463
PING xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx) 1463(1491) bytes of data.
1471 bytes from xxx.xxx.xxx.xxx: icmp_seq=1 ttl=39 time=243 ms
1471 bytes from xxx.xxx.xxx.xxx: icmp_seq=2 ttl=39 time=231 ms
1471 bytes from xxx.xxx.xxx.xxx: icmp_seq=3 ttl=39 time=243 ms

--- xxx.xxx.xxx.xxx ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2246ms
rtt min/avg/max/mdev = 231.495/239.439/243.632/5.634 ms
Length 1472    >>>>>>>>>>>> Remote instance MTU size = data size + IP header (20 bytes) + ICMP header (8 bytes) = 1472 + 20 + 8 = 1500
PING xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx) 1472(1500) bytes of data.
1480 bytes from xxx.xxx.xxx.xxx: icmp_seq=1 ttl=39 time=243 ms
1480 bytes from xxx.xxx.xxx.xxx: icmp_seq=2 ttl=39 time=231 ms
1480 bytes from xxx.xxx.xxx.xxx: icmp_seq=3 ttl=39 time=231 ms

--- xxx.xxx.xxx.xxx ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2234ms
rtt min/avg/max/mdev = 231.344/235.478/243.590/5.749 ms
Length 1473
PING xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx) 1473(1501) bytes of data.

--- xxx.xxx.xxx.xxx ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 11999ms
Length 1500
PING xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx) 1500(1528) bytes of data.

--- xxx.xxx.xxx.xxx ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 11999ms
Length 9000
PING xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx) 9000(9028) bytes of data.
ping: local error: Message too long, mtu=9000
ping: local error: Message too long, mtu=9000
ping: local error: Message too long, mtu=9000

--- xxx.xxx.xxx.xxx ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2999ms
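
As a minimal sketch of the persistent change described above (assuming the interface is eth0 and jumbo frames are wanted; the other ifcfg keys are illustrative and should match your existing configuration):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
MTU=9000

# Apply the change (Oracle Linux 6):
# service network restart
# Or (Oracle Linux 7):
# systemctl restart network

# Confirm:
# ip link show eth0 | grep mtu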

Wednesday, 14 July 2021

Oracle Linux: How To Downgrade UEK5 To UEK4 kernel

The solution in this guide can be applied on virtual and physical machines.

Solution

1. If you have a UEK4 kernel still available, boot that and then delete the UEK5 kernel(s).

yum remove $(rpm -qa kernel-uek | grep 4.14.35)

Note: Please check the kernels that are going to be removed before you hit "y".

2. If you don't have a UEK4 kernel, then install it.

yum install kernel-uek-4.1.12*
Or
yum install *4.1.12*

3. Boot that UEK4 kernel and then remove the UEK5 kernel as shown in step 1.

4. Please ensure you've disabled the UEK5 repo. Once you've disabled the UEK5 repo, run the command below:

yum list extras

5. If that shows anything that was in the UEK5 repo, then "yum downgrade ..." for those rpms will downgrade them to their earlier version.

Note: Do them all at once: yum downgrade's dependency calculations don't work (well, it doesn't have any as such).

It is highly recommended to back up the state of the system prior to any patching:

For Oracle guest VM backup, please refer to Oracle VM: How To Backup And Restore A VM Guest (domU) Domain On Oracle VM 3.x (Doc ID 1477421.1).
For other guest VMs, such as VMware and Microsoft Hyper-V, please engage with the corresponding support vendor.
For a physical machine, please back up the system to external storage (e.g. tape, storage snapshot, storage dedup, or any other third-party backup solution, etc.).
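
A hedged sketch of the surrounding checks (the repository id ol7_UEKR5 is only an example and varies by release and channel; list yours with 'yum repolist all'):

# Which kernel is running now, and which UEK kernels are installed?
# uname -r
# rpm -qa kernel-uek | sort

# Disable the UEK Release 5 repository before step 4 (requires yum-utils):
# yum-config-manager --disable ol7_UEKR5

# Then check for installed packages that no longer belong to any enabled repo:
# yum list extras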

Sunday, 20 June 2021

network IP setup for Solaris 11.4

Commonly Used Network Administration Commands
(Source: Oracle Solaris 11.4 Network Administration Cheatsheet, E61478, August 2018)

Note - Some of the following commands include parameters and values that are provided as examples only.

Administering Datalinks

Display all of the datalinks (physical and virtual) on a system:
# dladm show-link

Display all of the physical datalinks on a system:
# dladm show-phys

Display all of the properties for all of the datalinks on a system:
# dladm show-linkprop

Display all of the properties for a specific datalink on a system:
# dladm show-linkprop net0

Display a specific property for a specific datalink on a system:
# dladm show-linkprop -p mtu net0

Administering IP Interfaces and Addresses

Display general information about a system's IP interfaces:
# ipadm

Display a system's IP interfaces and addresses:
# ipadm show-addr

Create an IP interface and then configure a static IPv4 address for that interface:
# ipadm create-ip net0
# ipadm create-addr -a 203.0.113.0/24 net0/addr

Obtain an IP address from a DHCP server:
# ipadm create-ip net0
# ipadm create-addr -T dhcp net0/addr

Create an auto-generated IPv6 address:
# ipadm create-ip net0
# ipadm create-addr -T addrconf net0/addr

Change the netmask property for an IP address object name (net3/v4) to 8:
# ipadm set-addrprop -p prefixlen=8 net3/v4

Configure a persistent default route on a system:
# route -p add default 192.0.2.1/27

Configure a persistent default route by specifying a name:
# route -p add IP-address -name route1
persistent: route add IP-address -name route1

Configure a static route on a system:
# route -p add -net 192.0.2.35/27 -gateway 192.0.2.1/27

Display a system's default route:
# route -p show

Delete a persistent route by specifying a name:
# route -p delete -name route1
delete host -name route1 route-IP: gateway gateway-IP: not in table
delete persistent host -name route1 route-IP: gateway gateway-IP
If you do not specify the -p option with the -name option, the route is removed from the routing tables only.

Configure a system's host name:
# hostname hostname

Administering Naming Services

Configure DNS on a system:
# svccfg -s dns/client setprop config/nameserver=net_address: 192.0.2.1/27
# svccfg -s dns/client setprop config/domain = astring: "myhost.org"
# svccfg -s name-service/switch setprop config/host = astring: "files dns"
# svcadm refresh name-service/switch
# svcadm refresh dns/client
# svcadm enable dns/client

Administering External Network Modifiers (ENMs)

List all of the ENMs on a system:
# netadm list

Enable an ENM named myenm:
# netadm enable myenm

Administering Wireless Networks

Display information about available wireless networks:
# dladm scan-wifi

Connect to an unsecured wireless network with the strongest signal:
# dladm connect-wifi

Connect to an unsecured wireless network by specifying an ESSID:
# dladm connect-wifi -e ESSID

Check the status of the wireless network to which the system is currently connected:
# dladm show-wifi
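
Putting a few of the commands above together, a minimal sketch of bringing up a static IPv4 configuration on net0 (the interface name and addresses are examples only):

# Identify the physical link and create an IP interface on it:
# dladm show-phys
# ipadm create-ip net0

# Assign a static address and a persistent default route:
# ipadm create-addr -a 192.0.2.10/24 net0/v4
# route -p add default 192.0.2.1

# Verify:
# ipadm show-addr
# route -p show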

Tuesday, 6 April 2021

Linux Block Port With IPtables Command

Common ports:
1. TCP port 80 – HTTP server
2. TCP port 443 – HTTPS server
3. TCP port 25 – mail server
4. TCP port 22 – OpenSSH (remote) secure shell server
5. TCP port 110 – POP3 (Post Office Protocol v3) server
6. TCP port 143 – Internet Message Access Protocol (IMAP) – management of email messages
7. TCP / UDP port 53 – Domain Name System (DNS)

Block an incoming port with iptables

The syntax is as follows to block an incoming port using iptables:

/sbin/iptables -A INPUT -p tcp --destination-port {PORT-NUMBER-HERE} -j DROP

### interface section - use eth1 ###
/sbin/iptables -A INPUT -i eth1 -p tcp --destination-port {PORT-NUMBER-HERE} -j DROP

### only drop port for given IP or subnet ###
/sbin/iptables -A INPUT -i eth0 -p tcp --destination-port {PORT-NUMBER-HERE} -s {IP-ADDRESS-HERE} -j DROP
/sbin/iptables -A INPUT -i eth0 -p tcp --destination-port {PORT-NUMBER-HERE} -s {IP/SUBNET-HERE} -j DROP

To block port 80 (HTTP server), enter (or add to your iptables shell script):

# /sbin/iptables -A INPUT -p tcp --destination-port 80 -j DROP
# /sbin/service iptables save

See how to save iptables firewall rules permanently on Linux for more information.

Block incoming port 80 except for IP address 1.2.3.4:

# /sbin/iptables -A INPUT -p tcp -i eth1 ! -s 1.2.3.4 --dport 80 -j DROP

Block an outgoing port

The syntax is as follows:

/sbin/iptables -A OUTPUT -p tcp --dport {PORT-NUMBER-HERE} -j DROP

### interface section - use eth1 ###
/sbin/iptables -A OUTPUT -o eth1 -p tcp --dport {PORT-NUMBER-HERE} -j DROP

### only drop port for given IP or subnet ###
/sbin/iptables -A OUTPUT -o eth0 -p tcp --destination-port {PORT-NUMBER-HERE} -s {IP-ADDRESS-HERE} -j DROP
/sbin/iptables -A OUTPUT -o eth0 -p tcp --destination-port {PORT-NUMBER-HERE} -s {IP/SUBNET-HERE} -j DROP

To block outgoing port 25, enter:

# /sbin/iptables -A OUTPUT -p tcp --dport 25 -j DROP
# /sbin/service iptables save

You can block port 1234 for IP address 192.168.1.2 only:

# /sbin/iptables -A OUTPUT -p tcp -d 192.168.1.2 --dport 1234 -j DROP
# /sbin/service iptables save

How do I log dropped port details?

Use the following syntax:

# Logging #
### If you would like to log dropped packets to syslog, first log it ###
/sbin/iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "PORT 80 DROP: " --log-level 7
### now drop it ###
/sbin/iptables -A INPUT -p tcp --destination-port 80 -j DROP

How do I block a cracker (IP: 123.1.2.3) from accessing UDP port 161?

/sbin/iptables -A INPUT -s 123.1.2.3 -i eth1 -p udp -m state --state NEW -m udp --dport 161 -j DROP

# drop students 192.168.1.0/24 subnet to port 80
/sbin/iptables -A INPUT -s 192.168.1.0/24 -i eth1 -p tcp -m state --state NEW -m tcp --dport 80 -j DROP

How do I view blocked port rules?

Use the iptables command:

# /sbin/iptables -L -n -v
# /sbin/iptables -L -n -v | grep port
# /sbin/iptables -L -n -v | grep -i DROP
# /sbin/iptables -L OUTPUT -n -v
# /sbin/iptables -L INPUT -n -v
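
A small sketch for removing one of the DROP rules later (rule number 3 is an example; check the listing first):

# List INPUT rules with their rule numbers:
# /sbin/iptables -L INPUT -n --line-numbers

# Delete rule number 3 from the INPUT chain:
# /sbin/iptables -D INPUT 3

# Save the change so it survives an iptables service restart:
# /sbin/service iptables save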

Sunday, 28 March 2021

Enable root login over SSH:

As root, edit the sshd_config file, /etc/ssh/sshd_config:

nano /etc/ssh/sshd_config

Add a line in the Authentication section of the file that says PermitRootLogin yes. This line may already exist and be commented out with a "#". In this case, remove the "#".

# Authentication:
#LoginGraceTime 2m
PermitRootLogin yes
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10

Save the updated /etc/ssh/sshd_config file.

Restart the SSH server:

service sshd restart
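
A small sketch for validating the change (sshd -T requires an OpenSSH release that supports extended test mode; the hostname is a placeholder):

# Check the configuration for syntax errors before restarting:
# sshd -t

# Show the effective setting after the restart:
# sshd -T | grep -i permitrootlogin
permitrootlogin yes

# Test from another machine:
# ssh root@your-server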

Tuesday, 9 March 2021

How mdb calculates ZFS related values and how those differ from ZFS ARC size

 


Applies to:

Solaris Operating System - Version 10 6/06 U2 and later
Information in this document applies to any platform.

Purpose

This document describes how mdb calculates ZFS related values and how those differ from ZFS ARC size so that users understand correctly the relationship between these two.

Details

ARC size reported by arcstats

arcstats kernel statistics reports the current ZFS ARC usage.

# kstat -n arcstats
module: zfs                             instance: 0     
name:   arcstats                        class:    misc
        buf_size                        37861488
        data_size                       7838309824
        l2_hdr_size                     0
        meta_used                       170464568
        other_size                      115650152
        prefetch_meta_size              16952928
        rawdata_size                    0
        size                            8008774392

(The output is cut for brevity.)

'size' is the amount of active data in the ARC and it can be broken down as follows.

Solaris 11.x prior to Solaris 11.3 SRU 13.4 and Solaris 10 without 150400-46/150401-46

size = meta_used + data_size;

Solaris 11.3 SRU 13.4 or later and Solaris 10 with 150400-46/150401-46 or later

size = data_size;


meta_used = buf_size + other_size + l2_hdr_size + rawdata_size + prefetch_meta_size;

buf_size: size of in-core data to manage ARC buffers.

other_size: size of in-core data to manage ZFS objects.

l2_hdr_size: size of in-core data to manage L2ARC.

rawdata_size: size of raw data used for persistent L2ARC. (Solaris 11.2.8 or later)

prefetch_meta_size: size of in-core data to manage prefetch. (Solaris 11.3 or later)

data_size: size of cached on-disk file data and on-disk meta data.
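
As a worked check against the sample output above (which matches the older formula):

meta_used = buf_size + other_size + l2_hdr_size + rawdata_size + prefetch_meta_size
          = 37861488 + 115650152 + 0 + 0 + 16952928 = 170464568
size      = meta_used + data_size
          = 170464568 + 7838309824 = 8008774392

which agrees with the reported 'size' of 8008774392.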

 

How ZFS ARC is allocated from kernel memory

The way ZFS ARC is allocated from kernel memory depends on Solaris versions.

Solaris 10, Solaris 11.0, Solaris 11.1

To cache on-disk file data, ARC is allocated from 'zio_data_buf_XXX' (XXX indicates cache unit size, such as '4096', '8192' etc.) kmem caches allocated from 'zfs_file_data_buf' virtual memory (vmem) arena.
To cache on-disk meta data, ARC is allocated from 'zio_buf_XXX' kmem caches allocated from 'kmem_default' vmem arena.
In-core data is allocated from other kmem caches, 'arc_buf_t', 'dmu_buf_impl_t', 'l2arc_buf_t', etc. allocated from 'kmem_default' vmem arena.
Also, 'zio_data_buf_XXX' and 'zio_buf_XXX' are not used only to cache on-disk file and meta data; they are also used by ZFS IO routines for purposes other than the ZFS ARC.

Pages for 'zio_data_buf_XXX' are associated with the 'zvp' vnode and in the 'kzioseg' kernel segment.
Pages for 'zio_buf_XXX' and other caches are associated with the 'kvp', usual kernel vnode.

On Solaris 11.1 with SRU 3.4 or later, in addition to the above, 'zfs_file_data_lp_buf' vmem arena is used to allocate large pages.
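
The caches and arenas named above can be inspected with kstat; a short sketch (the same commands also appear in the memory-usage section later in this post):

# Per-cache statistics for the zio data buffer kmem caches:
# kstat -p -c kmem_cache | grep zio_data_buf

# Total memory in the ZFS file data vmem arena:
# kstat -p -m vmem | grep zfs_file_data_buf | grep mem_total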

Solaris 11.2

To cache on-disk file data, ARC is allocated from 'zio_data_buf_XXX' kmem caches allocated from 'zfs_file_data_buf' vmem arena.
To cache on-disk meta data, ARC is allocated from 'zio_buf_XXX' kmem caches allocated from the 'zfs_metadata_buf' vmem arena.
In-core data is allocated from other kmem caches, 'arc_buf_t', 'dmu_buf_impl_t', 'l2arc_buf_t', 'zfetch_triggert_t', etc. allocated from 'kmem_default' vmem arena.
Also, 'zio_data_buf_XXX' and 'zio_buf_XXX' are not used only to cache on-disk file and meta data; they are also used by ZFS IO routines for purposes other than the ZFS ARC.

Pages for both 'zio_data_buf_XXX' and 'zio_buf_XXX' are associated with the 'zvp' vnode and in the 'kzioseg' kernel segment.
Pages for other caches are associated with the 'kvp', usual kernel vnode.

Solaris 11.3 prior to SRU 21.5

The new kernel memory allocation mechanism, Kernel Object Manager (KOM) is introduced.
To cache on-disk file data, ARC is allocated from 'arc_data' kom class.
To cache on-disk meta data, ARC is allocated from 'arc_meta' kom class.
In-core data is allocated from other kmem caches, 'arc_buf_t', 'dmu_buf_impl_t', 'l2arc_buf_t', 'zfetch_triggert_t', etc. allocated from 'kmem_default' vmem arena.
Memory used by ZFS IO routines for purposes other than the ZFS ARC is allocated as 'kmem_alloc_XXX' from the 'kmem_default' vmem arena.

'kzioseg' segment and 'zvp' vnode no longer exist.

Solaris 11.3 SRU 21.5 or later

To cache on-disk file data, ARC is allocated from 'arc_data' kom class.
To cache on-disk meta data, ARC is allocated from the 'arc_meta' kom class.

'kmem_default_zfs' vmem arena is introduced to account for kernel memory used by zfs not to cache on-disk data.

In-core data, 'arc_buf_t', 'dmu_buf_impl_t', 'l2arc_buf_t', 'zfetch_triggert_t', etc., are now allocated from 'kmem_default_zfs' vmem arena.
Memory used by ZFS IO routines for purposes other than the ZFS ARC is also allocated as 'zio_buf_XXX' from the 'kmem_default_zfs' vmem arena.

 

ZFS information reported by ::memstat in mdb
::memstat reports ZFS related memory usage also, but it's not exactly the same as arcstats and its implementation depends on OS versions.

Solaris 10, Solaris 11.0, Solaris 11.1

> ::memstat
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                     540356              2110   13%
ZFS File Data              609140              2379   15%
Anon                        41590               162    1%
Exec and libs                5231                20    0%
Page cache                   2883                11    0%
Free (cachelist)           800042              3125   19%
Free (freelist)           2192512              8564   52%

Total                     4191754             16374
Physical                  4102251             16024

'ZFS File Data' shows the size of pages associated with the 'zvp', which is the size allocated from 'zio_data_buf_XXX' kmem caches.
It does not include on-disk meta data and in-core data. Also it contains some amount of data used by ZFS IO routines.

Solaris 11.2

> ::memstat
Page Summary                 Pages             Bytes  %Tot
----------------- ----------------  ----------------  ----
Kernel                      237329              1.8G   23%
Guest                            0                 0    0%
ZFS Metadata                 28989            226.4M    3%
ZFS File Data               699858              5.3G   67%
Anon                         41418            323.5M    4%
Exec and libs                 1366             10.6M    0%
Page cache                    4782             37.3M    0%
Free (cachelist)              1017              7.9M    0%
Free (freelist)              33817            264.1M    3%
Total                      1048576                8G

'ZFS File Data' shows the size allocated from 'zfs_file_data_buf' vmem arena. 'ZFS Metadata' shows the size of "pages associated with zvp" - 'ZFS File Data'.

Solaris 11.3 prior to SRU 17.5.0

> ::memstat
Page Summary                 Pages             Bytes  %Tot
----------------- ----------------  ----------------  ----
Kernel                      558607              4.2G    7%
ZFS Metadata                 27076            211.5M    0%
ZFS File Data              2743214             20.9G   33%
Anon                         68656            536.3M    1%
Exec and libs                 2067             16.1M    0%
Page cache                    7285             56.9M    0%
Free (cachelist)             21596            168.7M    0%
Free (freelist)            4927709             37.5G   59%
Total                      8372224             63.8G

> ::kom_class
ADDR             FLAGS NAME             RSS        MEM_TOTAL
4c066e91d80      -L-   arc_meta         211.5m     280m      
4c066e91c80      ---   arc_data         20.9g      20.9g 

'ZFS File Data' shows the size from the KOM statistics of 'arc_data'. 'ZFS Metadata' shows the size from the KOM statistics of 'arc_meta'.

Solaris 11.3 with SRU 17.5 and without SRU 21.5

> ::memstat -v
Page Summary                            Pages             Bytes  %Tot
---------------------------- ----------------  ----------------  ----
Kernel                                 636916              4.8G    4%
Kernel (ZFS ARC excess)                 16053            125.4M    0%
Defdump prealloc                       291049              2.2G    2%
ZFS Metadata                           137434              1.0G    1%
ZFS File Data                         4244593             32.3G   25%
Anon                                   114975            898.2M    1%
Exec and libs                            2000             15.6M    0%
Page cache                              15548            121.4M    0%
Free (cachelist)                       253689              1.9G    2%
Free (freelist)                      11064959             84.4G   66%
Total                                16777216              128G

::memstat on Solaris 11.3 SRU 17.5 or later has '-v' option to show the details.

'ZFS File Data' and 'ZFS Metadata' show the KOM statistics, the same as before.

In addition, 'Kernel (ZFS ARC excess)' shows the wasted memory associated with 'ZFS File Data' and 'ZFS Metadata'.

KOM can keep allocated memory which is not actually used at the moment, which is considered wasted.

Solaris 11.3 SRU 21.5 or later

> ::memstat -v
Page Summary                            Pages             Bytes  %Tot
---------------------------- ----------------  ----------------  ----
Kernel                                 671736              2.5G    6%
Kernel (ZFS ARC excess)                 21159             82.6M    0%
Defdump prealloc                       361273              1.3G    3%
ZFS Kernel Data                        131699            514.4M    1%
ZFS Metadata                            42962            167.8M    0%
ZFS File Data                         8857479             33.7G   84%
Anon                                    99066            386.9M    1%
Exec and libs                            2050              8.0M    0%
Page cache                               9265             36.1M    0%
Free (cachelist)                        14663             57.2M    0%
Free (freelist)                        273905              1.0G    3%
Total                                10485257             39.9G

In addition to the information shown prior to Solaris 11.3 SRU 21.5, 'ZFS Kernel Data' shows the size allocated from the 'kmem_default_zfs' arena (and its overhead).

Solaris 11.4 or later

> ::memstat -v
Usage Type/Subtype                      Pages    Bytes  %Tot  %Tot/%Subt
---------------------------- ---------------- -------- ----- -----------
Kernel                                3669091    13.9g  7.2%
  Regular Kernel                      2602037     9.9g        5.1%/70.9%
  ZFS ARC Fragmentation                 14515    56.6m        0.0%/ 0.3%
  Defdump prealloc                    1052539     4.0g        2.0%/28.6%
ZFS                                  28359638   108.1g 56.3%
  ZFS Metadata                         116083   453.4m        0.2%/ 0.4%
  ZFS Data                           27959629   106.6g       55.5%/98.5%
  ZFS Kernel Data                      283926     1.0g        0.5%/ 1.0%
User/Anon                              201462   786.9m  0.4%
Exec and libs                            3062    11.9m  0.0%
Page Cache                              29372   114.7m  0.0%
Free (cachelist)                          944     3.6m  0.0%
Free                                 18033911    68.7g 35.8%
Total                                50297480   191.8g  100%

 'ZFS ARC Fragmentation' under 'Kernel' shows the wasted memory.

 

Why are the values reported by ::memstat different from the size reported by arcstats?

There are a few factors.

ARC size includes cached on-disk file data, cached on-disk meta data, and various in-core data. But ::memstat does not report each of them. Prior to Solaris 11.2, only 'ZFS File Data' is reported.
Even on Solaris 11.2 and 11.3, in-core data is not reported. Also the accounting by arcstats and ::memstat does not completely match.

::memstat on Solaris 11.3 SRU 21.5 or later reports in-core data as 'ZFS Kernel Data', though in-core data counted by arcstats and by ::memstat are not exactly the same.

Another factor is wasted memory in kmem caches.
Consider a possible scenario here: customer ran a workload that was largely 128K blocksize based. This resulted in filling up the ARC cache with say X GB of 128K blocks. The customer then switched to a workload that was 8K based. The ARC cache now filled up Y GB of 8K blocks (the 128K blocks are evicted). When the 128K blocks are evicted from the ARC cache, they are returned to the 'zio_data_buf_131072' cache, where they will stay (unused by the ARC) until either re-allocated or "reaped" by the VM system.

Under such a condition, 'ZFS File Data' shown by ::memstat can be much higher than the ARC size.
Especially, from Solaris 11.1 with SRU 3.4 through Solaris 11.1 with SRU 21.4, large pages are used by default and the situation can be worse.

::memstat reports such waste as 'Kernel (ZFS ARC excess)' on Solaris 11.3 SRU 17.5 or later, or 'ZFS ARC Fragmentation' on Solaris 11.4 or later.

Also, it can happen that 'ZFS File Data' is higher than the ARC size even though 'ZFS ARC excess' / 'ZFS ARC Fragmentation' is not high.
In this case, the ARC memory has been freed but still has KOM objects associated with it.

As discussed above, it is clear that the values reported by ::memstat do not have to match the ZFS ARC size. It is not an issue if the ::memstat values are higher or lower than the ZFS ARC size.
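
To put the two views side by side on a live system, a minimal sketch (output omitted):

# ARC size as accounted by arcstats:
# kstat -p zfs:0:arcstats:size

# ZFS related lines from ::memstat (can take several minutes on large systems):
# echo "::memstat" | mdb -k | egrep -i "zfs|kernel|total"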

 

-------------

 



Applies to:

Solaris Operating System - Version 8.0 to 11.4 [Release 8.0 to 11.0]
All Platforms
*** Checked for currency and updated for Solaris 11.2 11-March-2015 ***


Goal

This document is intended to give hints on where to look when checking and troubleshooting memory usage.
In principle, investigation of memory usage is split into checking the usage of kernel memory and of user memory.

Please be aware that in the case of a memory-usage problem on a system, corrective actions usually require deep knowledge and must be performed with great care.

Solution

A general system practice is to keep the system up to date with the latest Solaris releases and patches.

First, you need to check how much memory is used by the kernel and how much is used as user memory. This is important for deciding which further troubleshooting steps are required.

A very useful mdb dcmd is '::memstat' ( this command can take several minutes to complete )
For more information on using the modular debugger, see the Oracle Solaris Modular Debugger Guide.
Solaris[TM] 9 Operating System or greater only !  Format varies with OS release.  This example is from Solaris 11.2

# echo "::memstat" | mdb -k
Page Summary                 Pages             Bytes  %Tot
----------------- ----------------  ----------------  ----
Kernel                      585584              4.4G   14%
Defdump prealloc            204802              1.5G    5%
Guest                            0                 0    0%
ZFS Metadata                 21436            167.4M    0%
ZFS File Data               342833              2.6G    8%
Anon                         56636            442.4M    1%
Exec and libs                 1131              8.8M    0%
Page cache                    4339             33.8M    0%
Free (cachelist)              8011             62.5M    0%
Free (freelist)            2969532             22.6G   71%
Total                      4194304               32G



User memory usage :  print out processes using most USER - memory
% prstat -s size # sorted by userland virtual memory consumption
% prstat -s rss # sorted by userland physical memory consumption

% prstat -s rss
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
  4051 user1     297M  258M sleep   59    0   1:35:05 0.0% mysqld/10
 26286 user2     229M  180M sleep   59    0   0:05:07 0.0% java/53
 27101 user2     237M  150M sleep   59    0   0:02:21 0.0% soffice.bin/5
 23335 user2     193M  135M sleep   59    0   0:12:33 0.0% firefox-bin/10
  3727 noaccess  192M  131M sleep   59    0   0:36:22 0.0% java/18
 22751 root      165M  131M sleep   59    0   1:13:12 0.0% java/46
  1448 noaccess  192M  108M sleep   59    0   0:34:47 0.0% java/18
 10115 root      129M   82M sleep   59    0   0:31:29 0.0% java/41
 20274 root      136M   77M stop    59    0   0:04:08 0.0% java/25
  3397 root      138M   76M sleep   59    0   0:12:42 0.0% java/37
 12949 pgsql      81M   70M sleep   59    0   0:09:36 0.0% postgres/1
 12945 pgsql      80M   70M sleep   59    0   0:00:05 0.0% postgres/1



 User Memory Usage : shows Shared Memory and Semaphores:

% ipcs -a

IPC status from
T  ID     KEY        MODE     OWNER   GROUP  CREATOR  CGROUP CBYTES  QNUM     QBYTES  LSPID  LRPID   STIME    RTIME    CTIME
Message Queues:
q  0  0x55460272 -Rrw-rw----   root    root     root    root    0       0     4194304  1390  18941  14:12:20  14:12:21  10:23:32
q  1  0x41460272 --rw-rw----   root    root     root    root    0       0     4194304  5914   1390   8:03:34   8:03:34  10:23:39
q  2  0x4b460272 --rw-rw----   root    root     root    root    0       0     4194304     0      0  no-entry  no-entry  10:23:39

T  ID      KEY       MODE      OWNER     GROUP CREATOR    CGROUP    NATTCH       SEGSZ  CPID   LPID     ATIME     DTIME    CTIME
Shared Memory:
m  0  0x50000b3f --rw-r--r--   root      root     root      root         1           4   738   738   18:50:36  18:50:36  18:50:36
m  1  0x52574801 --rw-rw----   root    oracle     root    oracle        35  1693450240  2049  26495  10:30:00  10:30:00  18:51:13
m  2  0x52574802 --rw-rw----   root    oracle     root    oracle        35  1258291200  2049  26495  10:30:00  10:30:00  18:51:16
m  3  0x52594801 --rw-rw----   root    oracle     root    oracle        12   241172480  2098  14328   7:58:33   7:58:33  18:51:27
m  4  0x52594802 --rw-rw----   root    oracle     root    oracle        12    78643200  2098  14329   7:58:32   7:58:33  18:51:27
m  5  0x52584801 --rw-rw----   root    oracle     root    oracle        13   125829120  2125  27492   1:36:12   1:36:12  18:51:34
m  6  0x52584802 --rw-rw----   root    oracle     root    oracle        13   268435456  2125  27487   1:36:10   1:36:11  18:51:34
m  7  0x525a4801 --rw-rw----   root    oracle     root    oracle        15   912261120  2160  27472   1:36:09   1:36:09  18:51:40
m  8  0x525a4802 --rw-rw----   root    oracle     root    oracle        15   268435456  2160  27467   1:36:08   1:36:09  18:51:42
m 8201 0x4d2     --rw-rw-rw-   root      root     root      root         0       32008  1528   1543  10:26:03  10:26:04  10:25:53

T  ID  KEY       MODE     OWNER       GROUP       CREATOR        CGROUP         NSEMS     OTIME    CTIME
Semaphores:
s  0   0x1   --ra-ra-ra-   root        root          root         root              1     16:17:35  18:50:33
s  1     0   --ra-ra----   root       oracle         root         oracle           36     10:33:28  18:51:17
s  2     0   --ra-ra----   root       oracle         root         oracle           13     10:33:28  18:51:27
s  3     0   --ra-ra----   root       oracle         root         oracle           14     10:33:28  18:51:34
s  4     0   --ra-ra----   root       oracle         root         oracle           16     10:33:27  18:51:42
s  5 0x4d2   --ra-ra-ra-   root       root           root         root               1    no-entry  10:25:53
s  6 0x4d3   --ra-ra-ra-   root       root           root         root               1    no-entry  10:25:53




User Memory Usage : lists User Memory usage of all processes ( except PID 0,2,3 )

# pmap -x /proc/* > /var/tmp/pmap-x
short list of total usage of these processes

% egrep "[0-9]:|^total" /var/tmp/pmap-x
     1:   /sbin/init
total Kb 2336 2080  128 -
1006:  rlogin cores4
total Kb 2216 1696    80 -
1007:  rlogin cores4
total Kb 2216 1696  104 -
  115:  /usr/sbin/nscd
total Kb 4208 3784 1704 -
-- snip --




User Memory Usage : check the usage of /tmp

% df -kl /tmp
Filesystem kbytes        used        avail capacity  Mounted on 
swap        1355552    2072 1353480        1%      /tmp

print the biggest 10 files and dirs in /tmp

% du -akd /tmp/ | sort -n | tail -10
288     /tmp/SUNWut
328     /tmp/log
576     /tmp/ips2
584     /tmp/explo
608     /tmp/ipso
3408    /tmp/sshd-truss.out
17992   /tmp/truss.p
22624   /tmp/js
49208   /tmp



 
User Memory Usage : Overall Memory usage on system

% vmstat -p 3
     memory           page          executable      anonymous      filesystem
   swap  free     re  mf  fr  de  sr  epi  epo  epf  api  apo  apf  fpi  fpo  fpf
19680912 27487976 21  94   0   0   0    0    0    0    0    0    0   14    0    0
 3577608 11959480  0  20   0   0   0    0    0    0    0    0    0    0    0    0
 3577328 11959240  0   5   0   0   0    0    0    0    0    0    0    0    0    0
 3577328 11959112 38 207   0   0   0    0    0    0    0    0    0    0    0    0
 3577280 11958944  0   1   0   0   0    0    0    0    0    0    0    0    0    0

 

scanrate 'sr'  should be 0  or near zero



 
User Memory Usage : Swap usage

% swap -l
swapfile              dev    swaplo  blocks      free
/dev/dsk/c0t0d0s1   32,25        16  1946032  1946032

% swap -s
total: 399400k bytes allocated + 18152k reserved = 417552k used, 1355480k available




common kernel statistics

print out all kernel statistics in a parse'able format

% kstat -p > /var/tmp/kstat-p



kernel memory statistics:

% kstat -p -c kmem_cache
% kstat -p -m vmem
% kstat -p -c vmem
% kstat -p | egrep zfs_file_data_buf | egrep mem_total



Alternatively to kstat, you can get kernel memory usage with ::kmastat, which prints the kernel memory allocator's per-cache buffer statistics:

# echo "::kmastat" | mdb -k > /var/tmp/kmastat
% more /var/tmp/kmastat
    cache                     buf    buf    buf     memory     alloc  alloc
    name                     size in use   total    in use   succeed  fail
 ------------------------- ------ ------  ------ --------- --------- -----
  kmem_magazine_1              16    470     508      8192       470     0
  kmem_magazine_3              32    970    1016     32768      1164     0
  kmem_magazine_7              64   1690    1778    114688      1715     0


Look for the highest numbers in column "memory in use" and for any numbers higher than '0' in column "alloc fail"

 

ZFS File Data:
    Keep system up-to-date with latest Solaris releases and patches
    Size memory requirements to actual system workload

        With a known application memory footprint, such as for a database application, you might cap the ARC size so that the
        application will not need to reclaim its necessary memory from the ZFS cache.
        Consider de-duplication memory requirements
        Identify ZFS memory usage with the following command:

# mdb -k
Loading modules: [ unix genunix specfs dtrace zfs scsi_vhci sd mpt mac px ldc ip
 hook neti ds arp usba kssl sockfs random mdesc idm nfs cpc crypto fcip fctl ufs
 logindmux ptm sppp ipc ]
> ::memstat
Page Summary                 Pages             Bytes  %Tot
----------------- ----------------  ----------------  ----
Kernel                      261969              1.9G    6%
Guest                            0                 0    0%
ZFS Metadata                 13915            108.7M    0%
ZFS File Data               111955            874.6M    3%
Anon                         52339            408.8M    1%
Exec and libs                 1308             10.2M    0%
Page cache                    5932             46.3M    0%
Free (cachelist)             16460            128.5M    0%
Free (freelist)            3701754             28.2G   89%
Total                      4165632             31.7G
> $q

In case the amount of ZFS File Data is too high on the system, you might consider how to limit how much memory ZFS can consume.

For Solaris revisions prior to Solaris 11, the only way to accomplish this is to limit the ARC cache
by setting zfs:zfs_arc_max in /etc/system
set zfs:zfs_arc_max = [size]
i.e. limit the cache to 1 GB in size
set zfs:zfs_arc_max = 1073741824

Please check the following documents to check/limit the ARC
How to Understand "ZFS File Data" Value by mdb and ZFS ARC Size. (Doc ID 1430323.1)
Oracle Solaris Tunable Parameters Reference Manual

Starting at Solaris 11, a second method, reserving memory for applications, may be used to prevent ZFS from using too much memory.

The entry in /etc/system looks like this:

set user_reserve_hint_pct=60
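
A hedged sketch for checking the current values (c_max in arcstats reflects the effective ARC cap; the user_reserve_hint_pct variable exists only on releases that provide the tunable, and /etc/system changes take effect after a reboot):

# Effective ARC maximum, in bytes:
# kstat -p zfs:0:arcstats:c_max

# Current user_reserve_hint_pct value, printed as a decimal:
# echo "user_reserve_hint_pct/D" | mdb -k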

 

configure /dev/shm size of Linux

 

How to configure /dev/shm size of Linux?

To change the configuration for /dev/shm, add one line to /etc/fstab as follows.

tmpfs /dev/shm tmpfs defaults,size=8g 0 0

Here, the /dev/shm size is configured to be 8GB (make sure you have enough physical memory installed).

It will take effect the next time Linux reboots. If you would like to make it take effect immediately, run:

# mount -o remount /dev/shm

 

========

For many facilities there are system calls, others are hidden behind netlink interfaces, and even others are exposed via virtual file systems such as /proc or /sys. These file systems are programming interfaces, they are not actually backed by real, persistent storage. They simply use the file system interface of the kernel as interface to various unrelated mechanisms.


Now, by default systemd assigns a certain part of your physical memory to these partitions as a threshold. But what if your requirements call for changing a tmpfs partition's size?

For some of the tmpfs partitions, you can change the threshold size by using fstab, while for other partitions like /run/user/, which are created at runtime, you cannot use fstab to change the tmpfs partition size.

Below are the list of tmpfs partitions available in RHEL 7

Filesystem Size Used Avail Use% Mounted on
tmpfs      187G    0  187G   0% /dev/shm
tmpfs      187G  41M  187G   1%  /run
tmpfs      187G    0  187G   0% /sys/fs/cgroup
tmpfs       38G    0   38G   0% /run/user/1710
tmpfs       38G    0   38G   0% /run/user/0
NOTE:
You may notice that /etc/fstab does not contain entries for these tmpfs partitions, but df -h will still show them.

 

Change tmpfs partition size for /dev/shm

If an application is POSIX compliant or it uses GLIBC (2.2 and above) on a Red Hat Enterprise Linux system, it will usually use the /dev/shm for shared memory (shm_open, shm_unlink). /dev/shm is a temporary filesystem (tmpfs) which is mounted from /etc/fstab. Hence the standard options like "size" supported for tmpfs can be used to increase or decrease the size of tmpfs on /dev/shm (by default it is half of available system RAM).


For example, to set the size of /dev/shm to 2GiB, change the following line in /etc/fstab:

Default:

none     /dev/shm       tmpfs   defaults                0 0

To:

none     /dev/shm       tmpfs   defaults,size=2G        0 0

For the changes to take effect immediately remount /dev/shm:

# mount -o remount /dev/shm
NOTE:
A mount -o remount to shrink a tmpfs will only succeed if the blocks and inodes currently allocated fit within the new, smaller tmpfs size. It is not possible to predict or control this; however, a remount simply will not work if it cannot be done. In that case, stop all processes using the tmpfs, unmount it, and remount it using the new size.

Lastly validate the new size

# df -h /dev/shm
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           2.0G     0  2.0G   0% /dev/shm

 

Change tmpfs partition size for /run

/run is a filesystem which is used by applications the same way /var/run was used in previous versions of RHEL. Now /var/run is a symlink to /run filesystem. Previously early boot programs used to place runtime data in /dev under numerous hidden dot directories. The reason they used directories in /dev was because it was known to be available from very early time during machine boot process. Because /var/run was available very late during boot, as /var might reside on a separate file system, directory /run was implemented.

 

By default you may not find any /etc/fstab entry for /run, so you can add below line

none     /run          tmpfs       defaults,size=600M        0 0

For the changes to take effect immediately remount /run:

# mount -o remount /run

lastly validate the new size

# df -h /run
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           600M  9.6M  591M   2% /run

 

Change tmpfs partition size for /run/user/$UID

/run/user/$UID is a filesystem used by pam_systemd to store files used by running processes for that user. In previous releases these files were typically stored in /tmp, as it was the only location specified by the FHS which is local and writeable by all users. However, using /tmp can cause issues because it is writeable by anyone and thus access control is challenging. Using /run/user/$UID fixes the issue because it is only accessible by the target user.

IMPORTANT NOTE:
You cannot change tmpfs partition size for /run/user/$UID using /etc/fstab.

The tmpfs partition size for /run/user/$UID is based on the RuntimeDirectorySize value in /etc/systemd/logind.conf.

# grep -i runtime /etc/systemd/logind.conf
RuntimeDirectorySize=10%

By default, the threshold for these runtime directories is 10% of the total physical memory.

From the man page of logind.conf

RuntimeDirectorySize=
      Sets the size limit on the $XDG_RUNTIME_DIR runtime directory for each user who logs in. Takes a size in bytes, optionally suffixed with the usual K, G, M, and T suffixes, to the base 1024 (IEC). Alternatively, a numerical percentage suffixed by "%" may be specified, which sets the size limit relative to the amount of physical RAM. Defaults to 10%. Note that this size is a safety limit only. As each runtime directory is a tmpfs file system, it will only consume as much memory as is needed.

Modify this variable to your required value; for example, I have provided a threshold of 100M:

# grep -i runtime /etc/systemd/logind.conf
RuntimeDirectorySize=100M

Next restart the systemd-logind service

IMPORTANT NOTE:
A reboot of the node is required to activate the changes.
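
The restart mentioned above would typically look like this (a sketch; per the important note, a reboot is still required to activate the change, and UID 1710 is taken from the earlier df listing):

# systemctl restart systemd-logind

# After the reboot, verify the new size:
# df -h /run/user/1710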

 

Change tmpfs partition size for /sys/fs/cgroup

/sys/fs/cgroup is an interface through which Control Groups can be accessed. By default there may or may not be /etc/fstab content for /sys/fs/cgroup so add a new entry

Current value for /sys/fs/cgroup

# df -h /sys/fs/cgroup
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            63G     0   63G   0% /sys/fs/cgroup

Add below line in your /etc/fstab to change the threshold to 2GB

none          /sys/fs/cgroup          tmpfs       defaults,size=2G         0 0

Remount the partition /sys/fs/cgroup

# mount -o remount /sys/fs/cgroup

Lastly validate the updated changes

# df -h /sys/fs/cgroup
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup

 

 

Monday, 8 March 2021

How to Check and Analyze Solaris Memory Usage

Solaris Operating System - Version 8.0 to 11.4 [Release 8.0 to 11.0] All Platforms *** Checked for currency and updated for Solaris 11.2 11-March-2015 *** Goal This document is intended to give hints, where to look for in checking and troubleshooting memory usage. In principle, investigation of memory usage is split in checking usage of kernel memory and user memory. Please be aware that in case of a memory-usage problem on a system, corrective actions usually requires deep knowledge and must be performed with great care. Solution General System Practices is to keep system up-to-date with latest Solaris releases and patches First, you need to check how much Memory is used in Kernel and how much is used in User Memory. This is important to decide, which further troubleshooting steps are required. A very useful mdb dcmd is '::memstat' ( this command can take several minutes to complete ) For more information on using the modular debugger, see the Oracle Solaris Modular Debugger Guide. Solaris[TM] 9 Operating System or greater only ! Format varies with OS release. This example is from Solaris 11.2 # echo "::memstat" | mdb -k Page Summary Pages Bytes %Tot ----------------- ---------------- ---------------- ---- Kernel 585584 4.4G 14% Defdump prealloc 204802 1.5G 5% Guest 0 0 0% ZFS Metadata 21436 167.4M 0% ZFS File Data 342833 2.6G 8% Anon 56636 442.4M 1% Exec and libs 1131 8.8M 0% Page cache 4339 33.8M 0% Free (cachelist) 8011 62.5M 0% Free (freelist) 2969532 22.6G 71% Total 4194304 32G User memory usage : print out processes using most USER - memory % prstat -s size # sorted by userland virtual memory consumption % prstat -s rss # sorted by userland physical memory consumption % prstat -s rss PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP 4051 user1 297M 258M sleep 59 0 1:35:05 0.0% mysqld/10 26286 user2 229M 180M sleep 59 0 0:05:07 0.0% java/53 27101 user2 237M 150M sleep 59 0 0:02:21 0.0% soffice.bin/5 23335 user2 193M 135M sleep 59 0 0:12:33 0.0% firefox-bin/10 3727 noaccess 192M 131M sleep 59 0 0:36:22 0.0% java/18 22751 root 165M 131M sleep 59 0 1:13:12 0.0% java/46 1448 noaccess 192M 108M sleep 59 0 0:34:47 0.0% java/18 10115 root 129M 82M sleep 59 0 0:31:29 0.0% java/41 20274 root 136M 77M stop 59 0 0:04:08 0.0% java/25 3397 root 138M 76M sleep 59 0 0:12:42 0.0% java/37 12949 pgsql 81M 70M sleep 59 0 0:09:36 0.0% postgres/1 12945 pgsql 80M 70M sleep 59 0 0:00:05 0.0% postgres/1 User Memory Usage : shows Shared Memory and Semaphores: % ipcs -a IPC status from T ID KEY MODE OWNER GROUP CREATOR CGROUP CBYTES QNUM QBYTES LSPID LRPID STIME RTIME CTIME Message Queues: q 0 0x55460272 -Rrw-rw---- root root root root 0 0 4194304 1390 18941 14:12:20 14:12:21 10:23:32 q 1 0x41460272 --rw-rw---- root root root root 0 0 4194304 5914 1390 8:03:34 8:03:34 10:23:39 q 2 0x4b460272 --rw-rw---- root root root root 0 0 4194304 0 0 no-entry no-entry 10:23:39 T ID KEY MODE OWNER GROUP CREATOR CGROUP NATTCH SEGSZ CPID LPID ATIME DTIME CTIME Shared Memory: m 0 0x50000b3f --rw-r--r-- root root root root 1 4 738 738 18:50:36 18:50:36 18:50:36 m 1 0x52574801 --rw-rw---- root oracle root oracle 35 1693450240 2049 26495 10:30:00 10:30:00 18:51:13 m 2 0x52574802 --rw-rw---- root oracle root oracle 35 1258291200 2049 26495 10:30:00 10:30:00 18:51:16 m 3 0x52594801 --rw-rw---- root oracle root oracle 12 241172480 2098 14328 7:58:33 7:58:33 18:51:27 m 4 0x52594802 --rw-rw---- root oracle root oracle 12 78643200 2098 14329 7:58:32 7:58:33 18:51:27 m 5 0x52584801 --rw-rw---- root oracle root oracle 13 
125829120 2125 27492 1:36:12 1:36:12 18:51:34 m 6 0x52584802 --rw-rw---- root oracle root oracle 13 268435456 2125 27487 1:36:10 1:36:11 18:51:34 m 7 0x525a4801 --rw-rw---- root oracle root oracle 15 912261120 2160 27472 1:36:09 1:36:09 18:51:40 m 8 0x525a4802 --rw-rw---- root oracle root oracle 15 268435456 2160 27467 1:36:08 1:36:09 18:51:42 m 8201 0x4d2 --rw-rw-rw- root root root root 0 32008 1528 1543 10:26:03 10:26:04 10:25:53 T ID KEY MODE OWNER GROUP CREATOR CGROUP NSEMS OTIME CTIME Semaphores: s 0 0x1 --ra-ra-ra- root root root root 1 16:17:35 18:50:33 s 1 0 --ra-ra---- root oracle root oracle 36 10:33:28 18:51:17 s 2 0 --ra-ra---- root oracle root oracle 13 10:33:28 18:51:27 s 3 0 --ra-ra---- root oracle root oracle 14 10:33:28 18:51:34 s 4 0 --ra-ra---- root oracle root oracle 16 10:33:27 18:51:42 s 5 0x4d2 --ra-ra-ra- root root root root 1 no-entry 10:25:53 s 6 0x4d3 --ra-ra-ra- root root root root 1 no-entry 10:25:53 User Memory Usage : lists User Memory usage of all processes ( except PID 0,2,3 ) # pmap -x /proc/* > /var/tmp/pmap-x short list of total usage of these processes % egrep "[0-9]:|^total" /var/tmp/pmap-x 1: /sbin/init total Kb 2336 2080 128 - 1006: rlogin cores4 total Kb 2216 1696 80 - 1007: rlogin cores4 total Kb 2216 1696 104 - 115: /usr/sbin/nscd total Kb 4208 3784 1704 - -- snip -- User Memory Usage : check the usage of /tmp % df -kl /tmp Filesystem kbytes used avail capacity Mounted on swap 1355552 2072 1353480 1% /tmp print the biggest 10 files and dirs in /tmp % du -akd /tmp/ | sort -n | tail -10 288 /tmp/SUNWut 328 /tmp/log 576 /tmp/ips2 584 /tmp/explo 608 /tmp/ipso 3408 /tmp/sshd-truss.out 17992 /tmp/truss.p 22624 /tmp/js 49208 /tmp User Memory Usage : Overall Memory usage on system % vmstat -p 3 memory page executable anonymous filesystem swap free re mf fr de sr epi epo epf api apo apf fpi fpo fpf 19680912 27487976 21 94 0 0 0 0 0 0 0 0 0 14 0 0 3577608 11959480 0 20 0 0 0 0 0 0 0 0 0 0 0 0 3577328 11959240 0 5 0 0 0 0 0 0 0 0 0 0 0 0 3577328 11959112 38 207 0 0 0 0 0 0 0 0 0 0 0 0 3577280 11958944 0 1 0 0 0 0 0 0 0 0 0 0 0 0 scanrate 'sr' should be 0 or near zero User Memory Usage : Swap usage % swap -l swapfile dev swaplo blocks free /dev/dsk/c0t0d0s1 32,25 16 1946032 1946032 % swap -s total: 399400k bytes allocated + 18152k reserved = 417552k used, 1355480k available common kernel statistics print out all kernel statistics in a parse'able format % kstat -p > /var/tmp/kstat-p kernel memory statistics: % kstat -p -c kmem_cache % kstat -p -m vmem % kstat -p -c vmem % kstat -p | egrep zfs_file_data_buf | egrep mem_total alternatively to kstat you can get kernel memory usage with kmastat prints kmastat buffers # echo "::kmastat" | mdb -k > /var/tmp/kmastat % more /var/tmp/kmastat cache buf buf buf memory alloc alloc name size in use total in use succeed fail ------------------------- ------ ------ ------ --------- --------- ----- kmem_magazine_1 16 470 508 8192 470 0 kmem_magazine_3 32 970 1016 32768 1164 0 kmem_magazine_7 64 1690 1778 114688 1715 0 Look for the highest numbers in column "memory in use" and for any numbers higher than '0' in column "alloc fail" ZFS File Data: Keep system up-to-date with latest Solaris releases and patches Size memory requirements to actual system workload With a known application memory footprint, such as for a database application, you might cap the ARC size so that the application will not need to reclaim its necessary memory from the ZFS cache. 
Consider de-duplication memory requirements Identify ZFS memory usage with the following command: # mdb -k Loading modules: [ unix genunix specfs dtrace zfs scsi_vhci sd mpt mac px ldc ip hook neti ds arp usba kssl sockfs random mdesc idm nfs cpc crypto fcip fctl ufs logindmux ptm sppp ipc ] > ::memstat Page Summary Pages Bytes %Tot ----------------- ---------------- ---------------- ---- Kernel 261969 1.9G 6% Guest 0 0 0% ZFS Metadata 13915 108.7M 0% ZFS File Data 111955 874.6M 3% Anon 52339 408.8M 1% Exec and libs 1308 10.2M 0% Page cache 5932 46.3M 0% Free (cachelist) 16460 128.5M 0% Free (freelist) 3701754 28.2G 89% Total 4165632 31.7G > $q In case the amount of ZFS File Data is too high on the system, you might consider how to limit how much memory ZFS can consume. For Solaris revisions prior to Solaris 11, the only way accomplish this is to limit the ARC cache by setting zfs:zfs_arc_max in /etc/system set zfs:zfs_arc_max = [size] i.e. limit the cache to 1 GB in size set zfs:zfs_arc_max = 1073741824 Please check the following documents to check/limit the ARC How to Understand "ZFS File Data" Value by mdb and ZFS ARC Size. (Doc ID 1430323.1) Oracle Solaris Tunable Parameters Reference Manual Starting at Solaris 11, a second method, reserving memory for applications, may be used to prevent ZFS from using too much memory. The entry in /etc/system looks like this: set user_reserve_hint_pct=60 ARC size reported by arcstats arcstats kernel statistics reports the current ZFS ARC usage. # kstat -n arcstats module: zfs instance: 0 name: arcstats class: misc buf_size 37861488 data_size 7838309824 l2_hdr_size 0 meta_used 170464568 other_size 115650152 prefetch_meta_size 16952928 rawdata_size 0 size 8008774392 (The output is cut for brevity.) 'size' is the amount of active data in the ARC and it can be broken down as follows. Solaris 11.x prior to Solaris 11.3 SRU 13.4 and Solaris 10 without 150400-46/150401-46 size = meta_used + data_size; Solaris 11.3 SRU 13.4 or later and Solaris 10 with 150400-46/150401-46 or later size = data_size; meta_used = buf_size + other_size + l2_hdr_size + rawdata_size + prefetch_meta_size; buf_size: size of in-core data to manage ARC buffers. other_size: size of in-core data to mange ZFS objects. l2_hdr_size: size of in-core data to manage L2ARC. rawdata_size: size of raw data used for persistent L2ARC. (Solaris 11.2.8 or later) prefetch_meta_size: size of in-core data to manage prefetch. (Solaris 11.3 or later) data_size: size of cached on-disk file data and on-disk meta data. How ZFS ARC is allocated from kernel memory The way ZFS ARC is allocated from kernel memory depends on Solaris versions. Solaris 10, Solaris 11.0, Solaris 11.1 To cache on-disk file data, ARC is allocated from 'zio_data_buf_XXX' (XXX indicates cache unit size, such as '4096', '8192' etc.) kmem caches allocated from 'zfs_file_data_buf' virtual memory (vmem) arena. To cache on-disk meta data, ARC is allocated from 'zio_buf_XXX' kmem caches allocated from 'kmem_default' vmem arena. In-core data is allocated from other kmem caches, 'arc_buf_t', 'dmu_buf_impl_t', 'l2arc_buf_t', etc. allocated from 'kmem_default' vmem arena. Also 'zio_data_buf_XXX' and 'zio_buf_XXX' are not used only to cache on-disk file and meta data but also used by ZFS IO routines not for ZFS ARC purpose. Pages for 'zio_data_buf_XXX' are associated with the 'zvp' vnode and in the 'kzioseg' kernel segment. Pages for 'zio_buf_XXX' and other caches are associated with the 'kvp', usual kernel vnode. 
On Solaris 11.1 with SRU 3.4 or later, in addition to the arenas described for Solaris 10/11.0/11.1 above, the 'zfs_file_data_lp_buf' vmem arena is used to allocate large pages.

Solaris 11.2

To cache on-disk file data, ARC is allocated from 'zio_data_buf_XXX' kmem caches allocated from the 'zfs_file_data_buf' vmem arena.
To cache on-disk metadata, ARC is allocated from 'zio_buf_XXX' kmem caches allocated from the 'zfs_metadata_buf' vmem arena.
In-core data is allocated from other kmem caches ('arc_buf_t', 'dmu_buf_impl_t', 'l2arc_buf_t', 'zfetch_triggert_t', etc.) allocated from the 'kmem_default' vmem arena.
Also, 'zio_data_buf_XXX' and 'zio_buf_XXX' are used not only to cache on-disk file data and metadata but also by ZFS IO routines for purposes other than the ZFS ARC.
Pages for both 'zio_data_buf_XXX' and 'zio_buf_XXX' are associated with the 'zvp' vnode and are in the 'kzioseg' kernel segment. Pages for the other caches are associated with 'kvp', the usual kernel vnode.

Solaris 11.3 prior to SRU 21.5

A new kernel memory allocation mechanism, the Kernel Object Manager (KOM), is introduced.
To cache on-disk file data, ARC is allocated from the 'arc_data' kom class.
To cache on-disk metadata, ARC is allocated from the 'arc_meta' kom class.
In-core data is allocated from other kmem caches ('arc_buf_t', 'dmu_buf_impl_t', 'l2arc_buf_t', 'zfetch_triggert_t', etc.) allocated from the 'kmem_default' vmem arena.
Memory used by ZFS IO routines for purposes other than the ZFS ARC is allocated as 'kmem_alloc_XXX' from the 'kmem_default' vmem arena.
The 'kzioseg' segment and the 'zvp' vnode no longer exist.

Solaris 11.3 SRU 21.5 or later

To cache on-disk file data, ARC is allocated from the 'arc_data' kom class.
To cache on-disk metadata, ARC is allocated from the 'arc_meta' kom class.
The 'kmem_default_zfs' vmem arena is introduced to account for kernel memory used by ZFS for purposes other than caching on-disk data.
In-core data ('arc_buf_t', 'dmu_buf_impl_t', 'l2arc_buf_t', 'zfetch_triggert_t', etc.) is now allocated from the 'kmem_default_zfs' vmem arena.
Memory used by ZFS IO routines for purposes other than the ZFS ARC is now allocated as 'zio_buf_XXX' from the 'kmem_default_zfs' vmem arena as well.

ZFS information reported by ::memstat in mdb

::memstat also reports ZFS related memory usage, but it is not exactly the same as arcstats, and its implementation depends on the OS version.

Solaris 10, Solaris 11.0, Solaris 11.1

> ::memstat
Page Summary                Pages                MB  %Tot
------------     ---------------  ----------------  ----
Kernel                     540356              2110   13%
ZFS File Data              609140              2379   15%
Anon                        41590               162    1%
Exec and libs                5231                20    0%
Page cache                   2883                11    0%
Free (cachelist)           800042              3125   19%
Free (freelist)           2192512              8564   52%
Total                     4191754             16374
Physical                  4102251             16024

'ZFS File Data' shows the size of pages associated with 'zvp', which is the size allocated from the 'zio_data_buf_XXX' kmem caches. It does not include on-disk metadata and in-core data. It also contains some amount of data used by ZFS IO routines.

Solaris 11.2

> ::memstat
Page Summary                Pages             Bytes  %Tot
-----------------  ---------------  ----------------  ----
Kernel                      237329              1.8G   23%
Guest                            0                 0    0%
ZFS Metadata                 28989            226.4M    3%
ZFS File Data               699858              5.3G   67%
Anon                         41418            323.5M    4%
Exec and libs                 1366             10.6M    0%
Page cache                    4782             37.3M    0%
Free (cachelist)              1017              7.9M    0%
Free (freelist)              33817            264.1M    3%
Total                      1048576                8G

'ZFS File Data' shows the size allocated from the 'zfs_file_data_buf' vmem arena. 'ZFS Metadata' shows the size of the pages associated with 'zvp' minus 'ZFS File Data'.
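On the releases where 'ZFS File Data' is backed by the 'zfs_file_data_buf' vmem arena, the arena's own kstats (the same pipeline shown in the kstat examples earlier, extended with mem_inuse) give a quick cross-check. A minimal sketch:

% kstat -p | egrep zfs_file_data_buf | egrep 'mem_inuse|mem_total'

Comparing mem_inuse with mem_total shows how much of the arena is currently allocated versus how much it has claimed in total.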
Solaris 11.3 prior to SRU 17.5.0

> ::memstat
Page Summary                Pages             Bytes  %Tot
-----------------  ---------------  ----------------  ----
Kernel                      558607              4.2G    7%
ZFS Metadata                 27076            211.5M    0%
ZFS File Data              2743214             20.9G   33%
Anon                         68656            536.3M    1%
Exec and libs                 2067             16.1M    0%
Page cache                    7285             56.9M    0%
Free (cachelist)             21596            168.7M    0%
Free (freelist)            4927709             37.5G   59%
Total                      8372224             63.8G

> ::kom_class
ADDR            FLAGS  NAME       RSS     MEM_TOTAL
4c066e91d80     -L-    arc_meta   211.5m  280m
4c066e91c80     ---    arc_data   20.9g   20.9g

'ZFS File Data' shows the size from the KOM statistics of 'arc_data'. 'ZFS Metadata' shows the size from the KOM statistics of 'arc_meta'.

Solaris 11.3 with SRU 17.5 and without SRU 21.5

> ::memstat -v
Page Summary                        Pages             Bytes  %Tot
----------------------------  ---------------  ----------------  ----
Kernel                               636916              4.8G    4%
Kernel (ZFS ARC excess)               16053            125.4M    0%
Defdump prealloc                     291049              2.2G    2%
ZFS Metadata                         137434              1.0G    1%
ZFS File Data                       4244593             32.3G   25%
Anon                                 114975            898.2M    1%
Exec and libs                          2000             15.6M    0%
Page cache                            15548            121.4M    0%
Free (cachelist)                     253689              1.9G    2%
Free (freelist)                    11064959             84.4G   66%
Total                              16777216              128G

::memstat on Solaris 11.3 SRU 17.5 or later has a '-v' option to show the details. 'ZFS File Data' and 'ZFS Metadata' show the same KOM statistics as before. In addition, 'Kernel (ZFS ARC excess)' shows memory wasted on top of the sum of 'ZFS File Data' and 'ZFS Metadata'. KOM can keep allocated memory that is not actually used at the moment, which is considered wasted.

Solaris 11.3 SRU 21.5 or later

> ::memstat -v
Page Summary                        Pages             Bytes  %Tot
----------------------------  ---------------  ----------------  ----
Kernel                               671736              2.5G    6%
Kernel (ZFS ARC excess)               21159             82.6M    0%
Defdump prealloc                     361273              1.3G    3%
ZFS Kernel Data                      131699            514.4M    1%
ZFS Metadata                          42962            167.8M    0%
ZFS File Data                       8857479             33.7G   84%
Anon                                  99066            386.9M    1%
Exec and libs                          2050              8.0M    0%
Page cache                             9265             36.1M    0%
Free (cachelist)                      14663             57.2M    0%
Free (freelist)                      273905              1.0G    3%
Total                              10485257             39.9G

In addition to the information shown prior to Solaris 11.3 SRU 21.5, 'ZFS Kernel Data' shows the size allocated from the 'kmem_default_zfs' arena (and its overhead).

Solaris 11.4 or later

> ::memstat -v
Usage Type/Subtype                     Pages     Bytes   %Tot  %Tot/%Subt
----------------------------  --------------  --------  -----  -----------
Kernel                               3669091     13.9g   7.2%
  Regular Kernel                     2602037      9.9g          5.1%/70.9%
  ZFS ARC Fragmentation                14515     56.6m          0.0%/ 0.3%
  Defdump prealloc                   1052539      4.0g          2.0%/28.6%
ZFS                                 28359638    108.1g  56.3%
  ZFS Metadata                        116083    453.4m          0.2%/ 0.4%
  ZFS Data                          27959629    106.6g         55.5%/98.5%
  ZFS Kernel Data                     283926      1.0g          0.5%/ 1.0%
User/Anon                             201462    786.9m   0.4%
Exec and libs                           3062     11.9m   0.0%
Page Cache                             29372    114.7m   0.0%
Free (cachelist)                         944      3.6m   0.0%
Free                                18033911     68.7g  35.8%
Total                               50297480    191.8g   100%

'ZFS ARC Fragmentation' under 'Kernel' shows the wasted memory.

Why do the values reported by ::memstat differ from the size reported by arcstats?

There are a few factors.

The ARC size includes cached on-disk file data, cached on-disk metadata, and various in-core data, but ::memstat does not report each of them. Prior to Solaris 11.2, only 'ZFS File Data' is reported. Even on Solaris 11.2 and 11.3, in-core data is not reported. Also, the accounting by arcstats and by ::memstat does not completely match. ::memstat on Solaris 11.3 SRU 21.5 or later reports in-core data as 'ZFS Kernel Data', though the in-core data counted by arcstats and by ::memstat is not exactly the same.

Another factor is wasted memory in kmem caches. Consider a possible scenario here: a customer ran a workload that was largely 128K blocksize based.
This resulted in filling up the ARC cache with, say, X GB of 128K blocks. The customer then switched to a workload that was 8K based. The ARC cache then filled up with Y GB of 8K blocks (the 128K blocks were evicted). When the 128K blocks are evicted from the ARC cache, they are returned to the 'zio_data_buf_131072' cache, where they will stay (unused by the ARC) until they are either re-allocated or "reaped" by the VM system. Under such a condition, 'ZFS File Data' shown by ::memstat can be much higher than the ARC size. Especially from Solaris 11.1 with SRU 3.4 through Solaris 11.1 with SRU 21.4, large pages are used by default and the situation can be worse. ::memstat reports such waste as 'Kernel (ZFS ARC excess)' on Solaris 11.3 SRU 17.5 or later, or as 'ZFS ARC Fragmentation' on Solaris 11.4 or later.

It can also happen that 'ZFS File Data' is higher than the ARC size even though 'ZFS ARC excess' / 'ZFS ARC Fragmentation' is not high. In this case, the ARC memory has been freed but still has KOM objects associated with it.

As discussed above, the values reported by ::memstat do not have to match the ZFS ARC size. It is not an issue if the ::memstat values are more or less than the ZFS ARC size.
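To put the two views side by side on a live system, the same tools used throughout this note can be combined. This is only a quick sketch; the '-v' option to ::memstat requires Solaris 11.3 SRU 17.5 or later, as noted above, so drop it on older releases:

# echo "::memstat -v" | mdb -k | egrep 'ZFS|ARC|Total'
# kstat -p zfs:0:arcstats:size

A large gap between the ZFS lines in ::memstat and the arcstats size, with 'Kernel (ZFS ARC excess)' or 'ZFS ARC Fragmentation' staying small, matches the freed-but-still-associated KOM case described above and is not by itself an issue.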