How to Resize Solaris ROOT File System

Friends, it is quite possible you have faced a similar scenario in your Solaris administration journey, especially if you have systems that have been running for years.

At the time of the server build, you might have allowed only limited space in the root or var file system. Servers get patched with every cycle of patches released by OEM vendors, and over time the root file system, and most often the var file system, fills up.

Since we are discussing an LDOM here, this is Solaris. As you may know, all patch data gets copied to /var/sadm, which tends to fill the var FS and may leave you in a situation where you cannot apply any patch or update from OEM vendors such as Oracle or Veritas. In that scenario, knowing how to resize the Solaris root file system is going to help.
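
If you want a quick look at how much space the patch data alone is taking, a simple check (assuming, as in my case, that /var/sadm sits on the /var file system) is:

# du -sh /var/sadm

On the system in this post that directory accounted for 6.7G, as you will see in the full listing under validation step 2 below.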

Let me share a similar experience of my own and the solution I adopted to meet the requirement and complete my work.

I was updating Solaris 10 to the latest level using the tools and procedures available in my customer environment. That went absolutely fine, but it left me with very little space in the var FS.

I was also supposed to update Veritas on my Solaris 10 LDOM, from 6.2.0.000 to 6.2.1.500. When I started to update the Veritas patches for Solaris using the method provided by Veritas, and the procedure we have in our customer environment, I got the error below, which makes it very clear that we need more space in the var file system on the Solaris 10 LDOM for the Veritas upgrade to work.

"CPI ERROR V-9-0-0 671 MB is required in the /var volume and only 324 MB is 
available on dev001. 348 MB needs to be freed up in the /var volume."

VALIDATION

1. Validating the issue is as important as the solution: understand the problem first, then apply the resolution with full awareness of what you are doing and what the impact may be. Take some snapshots of the current configuration, which will help if you need to revert (see the backup note after the vfstab output below). Since I was already working on the server, I checked the file system status and found it as below.

# df -h /var
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c0d0s5        7.9G   7.5G   324M    96%    /var
# cat /etc/vfstab
#device    device   mount     FS      fsck    mount   mount
#to mount  to fsck  point     type    pass    at boot options
fd      -  /dev/fd fd -       no      -
/proc   -       /proc   proc    -       no      -
/dev/dsk/c0d0s1 -       -       swap    -       no      -
/dev/dsk/c0d0s0 /dev/rdsk/c0d0s0  /       ufs  1       no      -
/dev/dsk/c0d0s5 /dev/rdsk/c0d0s5  /var    ufs     1       no   -
/devices   -       /devices        devfs   -       no      -
ctfs    -       /system/contract        ctfs    -       no     -
objfs   -       /system/object  objfs   -       no      -
swap    -       /tmp    tmpfs   -       yes     -
sharefs         -       /etc/dfs/sharetab       sharefs -     no      -
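
Before going any further, I would also suggest saving a copy of the current partition table (VTOC) so it can be put back if anything goes wrong. A minimal sketch, assuming the boot disk is c0d0 as in this example; the file names under /var/tmp are my own choice:

# prtvtoc /dev/rdsk/c0d0s2 > /var/tmp/c0d0.vtoc
# cp /etc/vfstab /var/tmp/vfstab.before-resize

If the partition table ever needs to be restored, fmthard can write the saved VTOC back:

# fmthard -s /var/tmp/c0d0.vtoc /dev/rdsk/c0d0s2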

2. Secondly, I checked whether I could remove some logs, or at least find out which folder or directory had eaten the most space, so I could look into it and remediate the situation. I found the result below. As I stated earlier, the sadm directory was consuming most of the space.

This is the OEM Solaris (Oracle/Sun) patch directory, and it is not advisable to touch it: it must stay intact to keep the integrity of the OS, which may start misbehaving if it is removed. There was no other folder we could truncate to get more space (see also the drill-down note after the listing below).

Run the command below from within the /var directory to get the space consumption of each folder in MB and GB.
# du -sh * | egrep "M|G"
  56M   adm
 2.6M   apache
 1.0M   cache
  58M   centrifydc
  40M   cron
  16M   log
 193M   opt
 208M   pca
  77M   preserve
 6.7G   sadm
  21M   sol
  30M   spool
 2.7M   svc
  41M   tmp
 2.2M   tomcat8
  62M   vx
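
If you are curious which parts of sadm hold the bulk of that 6.7G, a further drill-down such as the one below can help. This is only an illustration; on Solaris 10 most of this space is typically package and patch backout data, and as noted above it should not be removed.

# cd /var/sadm
# du -sk * | sort -n | tail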

3. So now we need to increase the file system. Here is how to increase a root file system in an LDOM, step by step. In my scenario, /var was 8 GB and I decided to add another 4 GB.

4. I ran the format command to see the available disk configuration in the LDOM.

The format output clearly shows which partitions are in use and their corresponding mount points. You can see the disk size as well; in this case it was a 36 GB disk presented as the boot device from the control domain (CDOM).

If you navigate further into the partition menu, you can note down the start and end cylinder numbers for each partition. Please make sure you write them down somewhere. This task is intrusive in nature, since we are editing the partition table of the Solaris boot disk itself, so pay attention while you work and keep yourself free of distractions.

# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
c0d0 <SUN-DiskImage-36GB cyl 65533 alt 2 hd 1 sec 1152>
/virtual-devices@100/channel-devices@200/disk@0
Specify disk (enter its number): 0
selecting c0d0
[disk formatted, no defect list found]
Warning: Current Disk has mounted partitions.
/dev/dsk/c0d0s0 is currently mounted on /. Please see umount(1M).
/dev/dsk/c0d0s1 is currently used by swap. Please see swap(1M).
/dev/dsk/c0d0s5 is currently mounted on /var. Please see umount(1M).
FORMAT MENU:
disk       - select a disk
type       - select (define) a disk type
partition  - select (define) a partition table
current    - describe the current disk
format     - format and analyze the disk
repair     - repair a defective sector
show       - translate a disk address
label      - write label to the disk
analyze    - surface analysis
defect     - defect list management
backup     - search for backup labels
verify     - read and display labels
save       - save new disk/partition definitions
volname    - set 8-character volume name
!<cmd>     - execute <cmd>, then return
quit
format> partition
partition> p
Current partition table (original):
Total disk cylinders available: 65533 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders  Size     Blocks
0       root    wm    7282 - 25486   10.00GB  (18205/0/0) 20972160
1       swap    wu       0 -  7281   4.00GB   (7282/0/0)   8388864
2     backup    wm       0 - 65532   36.00GB  (65533/0/0) 75494016
3 unassigned    wm       0           0        (0/0/0)            0
4 unassigned    wm       0           0        (0/0/0)            0
5        var    wm   25487 - 40050   8.00GB   (14564/0/0) 16777728
6 unassigned    wm       0           0        (0/0/0)            0
7 unassigned    wm       0           0        (0/0/0)            0

5. SOLUTION

This is the solution step, so please be careful. Continuing from step 4, we now know the partition details and the starting and ending cylinder numbers for the var file system.

So let’s edit the partition table entry for var to make it 12 GB. Select partition number 5 (in this case) from the partition menu and answer the prompts as in the example below. Any wrongly configured or overlapping sector/cylinder here may lead to data loss, so to be on the safe side have your latest backup ready to restore data if required.
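
Before typing anything into the partition menu, it is worth doing the cylinder arithmetic on paper. My own back-of-the-envelope check, based on the geometry shown above (1 head x 1152 sectors per track, so 1152 blocks per cylinder):

8 GB (current)   = 16777728 blocks / 1152 = 14564 cylinders (25487 - 40050, as printed)
12 GB (target)   = 25165824 blocks / 1152 = roughly 21846 cylinders
New end cylinder = 25487 + 21846 - 1 = approximately 47332

That end cylinder stays well below the last usable cylinder (65532) and does not run into any other assigned slice, so the var slice can safely be grown in place on this disk.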

partition> 5                >>> Attention: partition 5 is selected here for editing <<<
Part  Tag    Flag Cylinders       Size            Blocks
5     var    wm   25487 - 40050   8.00GB    (14564/0/0) 16777728
Enter partition id tag[var]: var
Enter partition permission flags[wm]: wm
Enter new starting cyl[25487]: 25487    >>> Keep the starting cylinder number the same <<<
Enter partition size[16777728b, 14564c, 40050e, 8192.25mb, 8.00gb]:
12gb   >>> Required size entered here <<<
partition> label              >>> DO NOT FORGET LABELING IT <<<
Ready to label disk, continue? y
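
Once the label is written, it does no harm to confirm the new table before growing the file system. None of the commands below is destructive:

partition> print        >>> slice 5 should now show 12.00GB with the new end cylinder <<<
partition> quit
format> quit
# prtvtoc /dev/rdsk/c0d0s2    >>> eyeball the slice map to confirm nothing overlaps <<<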

6. Now grow the var file system using the growfs command, as below.

# growfs -M /var /dev/rdsk/c0d0s5
Warning: inode blocks/cyl group (95) >= data blocks (48) in last 
cylinder group. This implies 768 sector(s) cannot be allocated.
/dev/rdsk/c0d0s5:       25165824 sectors in 4096 cylinders of 48 
tracks, 128 sectors 12288.0MB in 256 cyl groups (16 c/g, 48.00MB/g,
5824 i/g) 
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488,
885920,24189728, 24288160, 24386592, 24485024, 24583456, 24681888,
24780320, 24878752, 24977184, 25075616

That's it about how to resize a Solaris root file system; you are done. Below you can see that the var file system is now 12 GB. I continued my Veritas upgrade successfully and the work was accomplished. Keep learning as much as you can, extend your knowledge of the subject, and excel. See the other related topics listed at the end of this post.

7. Post Validation

After the work, please check that var is now 12 GB in size.

# df -h /var
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c0d0s5         12G   7.5G   4.3G    64%    /var

Note: All actions performed here require root-level access on the server.

We share our practical experiences with specific tasks a system administrator may need to perform, such as how to resize a Solaris root file system. Most of our posts are how-tos, with the goal of sharing real-world practical experience. If you think this is worth sharing with those who need it, we would really appreciate it.

Other Related topics

How to Check Linux Version

How to Remove Veritas File System

How to create a veritas file system in solaris

Extend VxFS in Solaris

How to fix failover service group in vcs

How to Activate Volume Group in Linux

Solaris 11 LDOM Recovery

How to unencapsulate the rootdisk in VxVM

What is Inodes in Linux

Linux Change Password