I currently have a two-disk ZFS mirror which has been filling up recently, so I decided to buy another drive. But when I started thinking about it, I was unsure how to actually make it usable. The issue is that I have ~11 TB on the existing pool (2×12 TB drives, a and b) and no spare drives of that size to copy all my data to while creating the new 3-drive pool from the same drives plus the additional new drive (c).

I saw there is a way to create a “broken” (degraded) raidz1 pool with just two drives (the new drive c plus one drive detached from the mirror), while keeping the data on the remaining drive, then copying the data over to the pool and ‘repairing’ it afterwards with that last drive.
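From what I’ve read, the trick is to use a sparse file as a stand-in for the missing disk, since zpool create won’t accept an absent device. A rough sketch with made-up device names (I have not tested this):

# Sparse placeholder file the size of one real disk
truncate -s 12T /tmp/fake.img

# Build the raidz1 from one freed-up drive, the new drive and the placeholder
zpool create newpool raidz1 /dev/disk/by-id/ata-DISK_B /dev/disk/by-id/ata-DISK_C /tmp/fake.img

# Take the placeholder offline immediately so ZFS never writes real data to it
zpool offline newpool /tmp/fake.img

# ...copy the data over from the remaining old drive, destroy the old pool...

# 'Repair' the pool by replacing the placeholder with the last real drive
zpool replace newpool /tmp/fake.img /dev/disk/by-id/ata-DISK_A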

As I only have 11 TB of data, which would theoretically fit on one disk, would I be able to:

  • keep the old pool
  • initialize the new pool with just one drive and copy over the data
  • detach one drive from the old pool and add it to the new pool (if that is possible, would there already be parity data generated on this drive at that point? Would the parity be generated in a way that would allow me to lose the other drive in the pool and recover the data from the remaining drive alone?)
  • destroy old pool, add last drive to new pool

I would be able to back up my important data, but I don’t have enough space to also back up my media library, which I’d like to not have to rebuild.

alternatively: anyone in Berlin wanna loan me a 12 TB drive?

    • needanke (OP) · 10 days ago

      I back up my personal data, for which I have the space. I do not back up my media, as I can’t justify the cost of another drive that size, and I’d be fine with losing it in the off chance I lose two drives in my array (or my entire server fucking up).

  • april@lemmy.world · 10 days ago

    I would detach one drive from the mirror first and make the raidz1 with the two drives, if that’s possible (not sure if it lets you create a pool in a degraded state).
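    As far as I know, zpool create won’t let you specify a missing device, which is why people use the sparse-file placeholder shown above. The detach step itself is straightforward (hypothetical device name; the remaining disk keeps the data but runs without redundancy):

    # Split the mirror: the pool stays online on the remaining disk
    zpool detach oldpool /dev/disk/by-id/ata-DISK_B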

  • amd@gts.amd.im · 10 days ago

    @needanke @tag-selfhosted This is the bummer I always run into with running ZFS at small scales. I have been running pools of z1 mirrors, which helps a little, but I still have to maintain tons of excess storage space.

    Someday I’m probably going to rip it all out and go back to LVM

  • one_knight_scripting@lemmy.world · 10 days ago · edited

    Interesting… Though I know nothing about your particular setup or migrating existing data, I have a similar project in the works: automatically setting up a ZFS RAID 10 on Ubuntu 24.04.

    If you are interested in seeing how I am doing it, I used the OpenZFS root-on-Debian and root-on-Ubuntu guides.

    For the code, take a look at this GitHub repo: https://github.com/Reddimes/ubuntu-zfsraid10/

    One thing to note is that this runs two zpools, one for / and one for /boot. It is also specifically UEFI; if you need legacy boot, you need to change the partitioning a little bit (see init.sh).

    BE WARNED THAT THIS SCRUBS ALL FILESYSTEMS AND DELETES ALL PARTITIONS

    To run it, boot an ubuntu-server live CD and run the following:

    git clone --depth 1 https://github.com/Reddimes/ubuntu-zfsraid10.git
    cd ubuntu-zfsraid10
    chmod +x *.sh
    vim init.sh    # Change all disks to be relevant to your setup.
    vim chroot.sh    # Same thing here.
    sudo ./init.sh
    

    On first login, there are a few things I have not scripted yet:

    apt update && apt full-upgrade
    dpkg-reconfigure grub-efi-amd64
    

    There are two parts to automating this: either I need to create a runonce.d service (here), or I need to add a script to the user’s profile.d directory which then deletes itself. I also need to include a proper netplan configuration. I’m simply not there yet.
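    For the profile.d route, something like this might do it (hypothetical path and contents, untested):

    # /etc/profile.d/zz-firstboot.sh -- sourced at first login
    sudo apt update && sudo apt full-upgrade
    sudo dpkg-reconfigure grub-efi-amd64
    # profile.d scripts are sourced, so $0 is unreliable; remove by known path
    sudo rm -f /etc/profile.d/zz-firstboot.sh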

    I imagine in your case you could start a new pool and use zfs send to copy over the data from the old pool, then remove the old pool entirely and add the old disks to the new pool. I certainly have never done this, though, and I suspect there may be an issue. The other option you have (if you have room for one more drive) is to configure it into a ZFS RAID 10: then you don’t need to migrate the data, but just add an additional mirror vdev with the additional drive and resilver.
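    The send/receive part would look roughly like this (assumed pool names oldpool and newpool; double-check the flags before trusting it with real data):

    # Snapshot everything on the old pool, then replicate it recursively
    zfs snapshot -r oldpool@migrate
    zfs send -R oldpool@migrate | zfs receive -F newpool

    # The RAID 10 route instead: grow the pool by one more mirror vdev
    zpool add newpool mirror /dev/disk/by-id/ata-DISK_C /dev/disk/by-id/ata-DISK_D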

    One thing I tried to do was make the scripts easily customizable. It is still not ready for that, though. You could simply change the zpool commands in init.sh.

    • needanke (OP) · 10 days ago

      Sounds interesting, but while I have room for one more drive, I don’t want to spend money on one more drive xD (As mentioned, I have ≥12 TB drives, so another one I don’t really need would hurt the wallet quite a bit.)

      • one_knight_scripting@lemmy.world · 10 days ago

        That is totally fair. Actually, I just upgraded to 12 TB drives, and that’s why I’m working on this. So huge props for your design choice. Also props for using ZFS; I feel like it flies under the radar a lot.

    • needanke (OP) · 9 days ago

      But the low-cost pool would probably not have the capacity to hold my data (or not be low-cost).

  • nixcamic@lemmy.world · 9 days ago · edited

    So the broken pool is kinda stupid and you shouldn’t do it; you’ll be running without parity the whole time. But if you want to risk it, it does work.

    Or… if you have the drives and space, you can combine a bunch of smaller drives with mdadm (assuming Linux, but FreeBSD has GEOM, I think) and then use that as your third drive; once everything is copied, do a zpool replace. That way you keep full parity the whole time.
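    Roughly, with made-up device names (linear mode just concatenates, so the capacities add up):

    # Concatenate two smaller disks into one big block device
    mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdx /dev/sdy

    # Use /dev/md0 as the third raidz1 member, then later swap in the real disk
    zpool replace newpool /dev/md0 /dev/disk/by-id/ata-DISK_A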

    Edit: the latest version of ZFS supports raidz expansion, so you could create a two-drive raidz1, copy everything over, then expand it. You will still be running your source disk without parity, but at least the destination would be safe.
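    Something like this, assuming an OpenZFS version with raidz expansion (2.3 or newer) and made-up device names:

    # Two-drive raidz1 from the new drive plus one freed-up old drive
    zpool create newpool raidz1 /dev/disk/by-id/ata-DISK_B /dev/disk/by-id/ata-DISK_C

    # ...copy the data over from the remaining source disk...

    # Expand the raidz1 with the last drive; existing data gets reflowed
    zpool attach newpool raidz1-0 /dev/disk/by-id/ata-DISK_A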

    • needanke (OP) · 9 days ago

      Sadly, I don’t have 11 TB worth of smaller empty drives around (and not even enough SATA ports, so I’d also have to buy an expansion card).

      Yeah, I don’t like the broken-pool idea either; that’s why I was hoping there was a better method.