# StackBounty: #20.04 #ssd #zfs Installed Ubuntu 20.04 with ZFS enabled, want to add another SSD to pool

Bounty: 50

I installed Ubuntu 20.04 with the experimental ZFS option enabled; indeed, under Disks and GParted I can see that most of my current SSD is occupied by a ZFS_member partition.

I have very little experience with ZFS. I set up ZFS on a previous Ubuntu 18 install, but the OS itself was not on ZFS.

Anyway: how do I add another SSD (not the same size or brand) to my current ZFS pool?

I just want the most basic "add space" option. I don't really care about 2x, 3x or 10x redundancy (in fact I don't want to alter the current ZFS redundancy setup, whatever it is); I just want extra space.
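
To be concrete, I assume the extra capacity should simply show up in the pool's size, which (if I'm reading the man page right) I can check before and after with:

zpool list rpool    # shows SIZE / ALLOC / FREE for the root pool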

(screenshot: Disks readout)

I found this: https://unix.stackexchange.com/questions/530968/adding-disks-to-zfs-pool

but it doesn't answer my question at a level I can follow.

For example, neither of the two people who answered specified whether it should be:

zpool create addonpool /dev/sdb
zpool add addonpool mirror /dev/sda /dev/sdb

Or just:

zpool add rpool mirror /dev/sda /dev/sdb    # "rpool" is the name of the existing pool, apparently

Nor did they specify what syntax is used to point to the drives.
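
For what it's worth, my best guess from the zpool man page is that, since I don't want a mirror, it is a plain add of the new disk to the existing pool, something like the line below (I'm using /dev/sdb purely as an example; I'd double-check which /dev name the new SSD actually got before running anything):

sudo zpool add rpool /dev/sdb    # my guess: adds sdb as a second top-level (striped) vdev, no mirroring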

All the links I found reference identifiers like c0t3d0, c1t3d0, and c1t1d0.

I can't find any such identifier. This guide: https://www.thegeekdiary.com/zfs-tutorials-creating-zfs-pools-and-file-systems/
uses echo | format, which does not work in Ubuntu 20.04.
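
The closest Linux equivalents I found are the plain /dev/sdX names and the stable links under /dev/disk/by-id/, which I assume are what should be used instead:

lsblk -o NAME,SIZE,MODEL,SERIAL    # plain /dev names plus model and serial
ls -l /dev/disk/by-id/             # stable identifiers that don't change if drives get reordered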

I do know their GUIDs:

t@tsu:~$ sudo lshw -class disk
[sudo] password for t: 
  *-disk:0                  
       description: ATA Disk
       product: Samsung SSD 850
       physical id: 0
       bus info: scsi@2:0.0.0
       logical name: /dev/sda
       version: 2B6Q
       serial: S2RBNX0J524197X
       size: 465GiB (500GB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=32f4df93-2b50-4a68-a888-f0570adac413 logicalsectorsize=512 sectorsize=512
  *-disk:1
       description: ATA Disk
       product: Crucial_CT525MX3
       physical id: 1
       bus info: scsi@4:0.0.0
       logical name: /dev/sdb
       version: R040
       serial: 172918010661
       size: 489GiB (525GB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=d3e2b4ab-2c44-4da8-ac0c-fdb8053d35da logicalsectorsize=512 sectorsize=512

I did test running zpool on its own to get the usage text, and that works, so I know I'd be able to run the commands above; I just want to not mess it up.
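
On that note, the man page lists a -n (dry-run) option for zpool add, so I'm hoping I can preview whatever the correct command turns out to be before committing, something like:

sudo zpool add -n rpool /dev/sdb    # -n only displays the resulting configuration, nothing is added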

Also, I'm planning on doing this by logging out and running my commands in a TTY. It does nag me that, technically, at that point I have not really exited the environment that is using my ZFS pool, so will that work, or should this be done from a live USB?

t@tsu:~$ zpool status
  pool: bpool
 state: ONLINE
  scan: none requested
config:

    NAME                                    STATE     READ WRITE CKSUM
    bpool                                   ONLINE       0     0     0
      73ea4055-b5ea-894b-a861-907bb222d9ea  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: none requested
config:

    NAME                                    STATE     READ WRITE CKSUM
    rpool                                   ONLINE       0     0     0
      7905bb43-ac9f-a843-b1bb-8809744d9025  ONLINE       0     0     0

errors: No known data errors
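
As a side note, the names under bpool and rpool look like partition UUIDs rather than whole-disk names. If that's right, I assume I can map them back to actual partitions with something like:

ls -l /dev/disk/by-partuuid/ | grep 7905bb43    # map the rpool vdev name back to a /dev/sdXN partition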

