Detailed Guide to Setting Up ZFS RAID on Ubuntu 22.04

In a previous post, I showed how to put together a cheap load balancer server for $500. I later had an idea to use that server for storage too. Storage servers don't need much CPU, just a lot of disk space - and this one had very little disk and a pretty decent CPU. The combination of the two workloads - a load balancer and a NAS - is a good one.

Typically, you'd pick up a TrueNAS or an Unraid OS and install that. But in this case, given I wanted the server to solve two purposes, I wanted to stick with Ubuntu Server. So it made sense to go with RAID software on Ubuntu, not a dedicated NAS OS. On Ubuntu Linux, the main choice is between two options: mdadm, otherwise known as Linux RAID, and OpenZFS, an open-source derivative of the ZFS project (the Zettabyte File System, developed by Sun Microsystems).

I later learnt that ZFS does checksumming - or, actually, the other way around: I realized that mdadm doesn't do checksumming. How can you build reliable storage without block checksums?! - I ask sarcastically. That sealed the decision in favor of ZFS. The project seems stable from what I could see - even though Linus Torvalds doesn't want to merge ZFS filesystem code into the Linux kernel, because… Oracle likes lawsuits 1. Lawsuits aside, the OpenZFS project has active contributions 2 and has been in development for a while.

ZFS Concepts

Before we get started, let's familiarize ourselves with some basic concepts of ZFS:

ZFS Pool: A pool is a collection of one or more virtual devices.
Virtual Devices: A vdev can be a file, a disk, a RAID array, a spare disk, a log, or a cache.
Write hole: When the parity data doesn't match up against the data on the other drives, and you can't determine which drive has the right data.
RAID-Z: A variation of RAID 5 implemented by ZFS, which provides the write atomicity (via copy-on-write) required to avoid write holes.
ZFS Intent Log (ZIL): A write-ahead log used to log operations to disk before writing to the pool. If a log vdev is set, ZFS will use that - so it makes sense to use an NVMe drive as a log vdev (also called SLOG - separate intent log, just a fancy term).
ZFS Cache: ZFS uses ARC (Adaptive Replacement Cache) to speed up reads out of RAM. It also has a level-2 ARC (L2ARC), which can house data evicted from the ARC. An NVMe drive can be set as a cache vdev for this purpose.

Something I found surprising about ZFS, compared to mdadm, was that it doesn't allow expansion of an array - until now 3. The reason has been that adding a new drive puts strain on the array and elevates the chances of other disks failing. Plus, waiting days for an array to rebuild isn't practical for businesses. So the (original) idea is to expand a pool by adding a new RAID-array vdev to it, instead of adding a single disk to an existing RAID-array vdev. Hence the idea that a pool is a collection of virtual devices.

Our goal is to put together a 3-disk RAID-Z array - the equivalent of RAID level 5, which uses one disk's worth of capacity to store parity, allowing the loss of a single disk out of the three. Now, the thing is, one of the disks (Disk-Z) actually contains the information I want to keep.
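To make the vdev and pool concepts concrete, here is a minimal sketch of the `zpool` commands involved. The pool name `tank` and all device paths are placeholders I've chosen for illustration - substitute your own devices (ideally the stable `/dev/disk/by-id/` paths rather than `/dev/sdX` names, which can change between boots). These commands are destructive to the named disks, so double-check them first.

```shell
# Create a pool named "tank" backed by a single 3-disk RAID-Z vdev.
# (Device names are placeholders - use your own.)
sudo zpool create tank raidz /dev/sda /dev/sdb /dev/sdc

# Optionally add an NVMe drive as a log vdev (SLOG) for the ZIL:
sudo zpool add tank log /dev/nvme0n1

# ...or as a cache vdev (L2ARC) instead:
# sudo zpool add tank cache /dev/nvme0n1

# Inspect the resulting layout, plus per-device checksum/error counters:
zpool status tank
```

Note how the log and cache roles are just more vdevs added to the pool - the same `zpool add` verb that would also attach a whole new RAID-Z vdev to grow the pool's capacity.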