| 00:26:50 | | AmAnd0A quits [Read error: Connection reset by peer] |
| 00:27:13 | | icedice quits [Client Quit] |
| 00:31:10 | | AmAnd0A joins |
| 00:35:59 | | adia quits [Client Quit] |
| 00:55:36 | | wessel1512 quits [Quit: Ping timeout (120 seconds)] |
| 00:55:59 | | wessel1512 joins |
| 01:20:00 | | benjinsm is now known as benjins |
| 01:20:01 | | benjins is now authenticated as benjins |
| 01:24:14 | | wessel1512 quits [Client Quit] |
| 01:24:38 | | wessel1512 joins |
| 01:24:41 | | icedice (icedice) joins |
| 01:46:58 | | AmAnd0A quits [Ping timeout: 252 seconds] |
| 01:47:37 | | AmAnd0A joins |
| 02:21:29 | <andrew> | I'm trying to figure out what width of RAIDZ1 I should use for an SSD pool I'm thinking of building |
| 02:21:49 | <andrew> | I keep seeing this "power of two plus parity" rule being thrown around, but I also see conflicting information |
| 02:22:01 | <andrew> | is there any reason why I *shouldn't* use a 4-wide RAIDZ1? |
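For reference, the "power of two plus parity" rule andrew mentions says the data disks (total width minus parity disks) should number a power of two, so the total is 2^n + p. A minimal sketch of the widths the rule endorses:

```sh
# Widths endorsed by "power of two plus parity": total = 2^n data disks + p parity disks.
for p in 1 2 3; do
    widths=""
    for n in 1 2 3 4; do widths="$widths $(( (1 << n) + p ))"; done
    echo "raidz$p:$widths disks"
done
# -> raidz1: 3 5 9 17; raidz2: 4 6 10 18; raidz3: 5 7 11 19
```

By this rule a 4-wide RAIDZ1 has three data disks, not a power of two, which is why andrew keeps seeing it discouraged.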
| 02:27:31 | | PredatorIWD joins |
| 02:31:07 | | PredatorIWD quits [Client Quit] |
| 02:39:28 | | Rhodezzz quits [Ping timeout: 265 seconds] |
| 02:39:50 | <TheTechRobo> | What's that org that operates their own DNS and lets you register free domain names that only work if you use their nameservers? |
| 02:41:00 | <@JAA> | OpenNIC? Though there are others which are similar, that's the big one. |
| 02:41:31 | | icedice quits [Client Quit] |
| 02:43:41 | <TheTechRobo> | JAA: Ah yes I believe that's the one. Thanks! |
| 03:13:16 | | icedice (icedice) joins |
| 03:14:22 | | icedice quits [Client Quit] |
| 03:24:34 | | icedice (icedice) joins |
| 04:33:03 | | Danielle quits [Read error: Connection reset by peer] |
| 04:35:16 | | BlueMaxima quits [Read error: Connection reset by peer] |
| 05:23:34 | | michaelblob quits [Read error: Connection reset by peer] |
| 06:22:27 | | nicolas17 quits [Client Quit] |
| 06:22:40 | | nicolas17 joins |
| 06:45:19 | | Zaxoosh quits [Remote host closed the connection] |
| 06:45:37 | | spirit quits [Quit: Leaving] |
| 07:07:50 | | Arcorann (Arcorann) joins |
| 07:17:10 | | icedice2 (icedice) joins |
| 07:20:17 | | icedice quits [Ping timeout: 265 seconds] |
| 07:20:40 | | icedice (icedice) joins |
| 07:22:50 | | icedice2 quits [Ping timeout: 252 seconds] |
| 09:03:13 | | icedice quits [Client Quit] |
| 09:22:54 | | icedice (icedice) joins |
| 10:33:36 | | imer quits [Quit: Oh no] |
| 10:35:56 | | imer (imer) joins |
| 11:46:18 | | ymgve joins |
| 11:56:03 | | BearFortress quits [Client Quit] |
| 12:34:06 | | BearFortress joins |
| 13:10:26 | | AmAnd0A quits [Ping timeout: 252 seconds] |
| 13:11:16 | | AmAnd0A joins |
| 13:17:09 | | icedice quits [Client Quit] |
| 13:26:23 | | HP_Archivist (HP_Archivist) joins |
| 13:38:46 | | icedice (icedice) joins |
| 13:41:05 | | HP_Archivist quits [Client Quit] |
| 13:49:22 | | Chris5010 quits [Ping timeout: 265 seconds] |
| 14:01:34 | | Chris5010 (Chris5010) joins |
| 14:25:35 | | Arcorann quits [Ping timeout: 252 seconds] |
| 14:56:13 | | hitgrr8 joins |
| 16:19:10 | | Zaxoosh joins |
| 16:43:40 | | T31M is now authenticated as T31M |
| 17:12:26 | | Chris50103 (Chris5010) joins |
| 17:14:18 | | Chris5010 quits [Ping timeout: 265 seconds] |
| 17:14:18 | | Chris50103 is now known as Chris5010 |
| 17:22:05 | | icedice quits [Client Quit] |
| 18:10:16 | | Wingy quits [Client Quit] |
| 18:11:32 | | icedice (icedice) joins |
| 18:13:20 | | gfhh joins |
| 18:55:09 | <chrismeller> | Andrew: I was doing some research on building a NAS a while back, and it seems like most people recommend that you just use ZFS and let it do its magic, rather than setting up some specific RAID level |
| 19:06:32 | <fireonlive> | there's ZFS RAIDZ1/Z2/Z3 |
| 19:07:20 | <fireonlive> | (or just straight up ZFS mirror) |
| 19:08:18 | <chrismeller> | Z1 is the default I believe... assuming you have enough disks, of course |
| 19:30:03 | | tzt_ is now known as tzt |
| 19:30:07 | <@JAA> | Magic? That seems awkward. The whole point of these is that you can select how much redundancy you want/how many disk failures you want the system to tolerate. |
| 19:35:09 | <@JAA> | I could imagine that a raidzX with 2^n+X disks would be slightly better for the parity calculation performance since it could work on units that are a power of two. But whether that actually matters in practice with modern hardware... Benchmark time. |
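Taking JAA's "Benchmark time" literally: one cheap way to compare raidz1 widths before buying hardware is file-backed vdevs. A minimal sketch, assuming root, an installed ZFS, and ~10 GiB of scratch space; it exercises the parity/allocation path only, not real SSD behavior:

```sh
# Compare raidz1 widths using sparse files as vdevs; pool and file names are arbitrary.
for width in 3 4 5; do
    files=""
    for i in $(seq "$width"); do
        truncate -s 2G "/tmp/z$i"
        files="$files /tmp/z$i"
    done
    zpool create -O compression=off testpool raidz1 $files
    echo "== raidz1, $width disks =="
    dd if=/dev/zero of=/testpool/bench bs=1M count=1024 conv=fdatasync
    zpool destroy testpool
    rm -f $files
done
```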
| 20:28:26 | | Zaxoosh quits [Remote host closed the connection] |
| 20:45:38 | | driib quits [Ping timeout: 252 seconds] |
| 20:52:23 | | hitgrr8 quits [Client Quit] |
| 20:55:08 | | AmAnd0A quits [Read error: Connection reset by peer] |
| 20:55:27 | | AmAnd0A joins |
| 20:58:57 | | icedice quits [Client Quit] |
| 21:24:58 | | icedice (icedice) joins |
| 21:32:06 | | driib (driib) joins |
| 21:38:19 | | driib quits [Client Quit] |
| 21:38:59 | | driib (driib) joins |
| 21:48:09 | | driib quits [Client Quit] |
| 21:48:54 | | driib (driib) joins |
| 21:53:16 | <immibis> | creating channels on discord breaks because it uses "open"ai to suggest emojis, and "open"ai is down |
| 21:53:24 | <immibis> | This is the future of the corporatized internet |
| 21:54:35 | | driib quits [Client Quit] |
| 21:55:18 | | driib (driib) joins |
| 21:58:36 | | driib quits [Client Quit] |
| 21:59:26 | | driib (driib) joins |
| 22:00:16 | <icedice> | lmao |
| 22:19:05 | <Ryz> | Woooooow |
| 22:19:06 | <@Fusl> | andrew: the general rule of thumb is to `fdisk -l /dev/sdX` the disk, check the physical sector size, and then apply the proper ashift= value during zpool creation. For a typical HDD or SSD with a 4096-byte sector size, that's 2^12 = 4096, so ashift=12 |
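Spelled out, the rule of thumb Fusl describes looks roughly like this (a sketch; the device and pool names are placeholders):

```sh
# Check the physical sector size; 4096 bytes means ashift=12, since 2^12 = 4096.
fdisk -l /dev/sdb | grep 'Sector size'
lsblk -o NAME,PHY-SEC /dev/sdb          # alternative view of the same value

# Apply it at pool creation; ashift is fixed per vdev once set, so getting it
# wrong means rebuilding. (Many SSDs report 512 but are 4K internally, so
# ashift=12 is a common floor even when fdisk says 512.)
zpool create -o ashift=12 tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
```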
| 22:22:50 | | VerifiedJ quits [Remote host closed the connection] |
| 22:23:24 | | VerifiedJ (VerifiedJ) joins |
| 22:27:30 | | driib quits [Client Quit] |
| 22:28:03 | | driib (driib) joins |
| 22:44:16 | | AmAnd0A quits [Ping timeout: 252 seconds] |
| 22:45:11 | | AmAnd0A joins |
| 22:46:50 | | AmAnd0A quits [Read error: Connection reset by peer] |
| 23:02:40 | <andrew> | Fusl: that part I know, I'm asking about whether the number of disks per vdev in a RAIDZ1/2/3 configuration actually matters |
| 23:03:32 | <andrew> | the problem is that to see how well a specific hardware configuration performs, I'd need to buy the hardware, and I'd like a decent idea of its performance before buying it |
| 23:04:41 | <@Fusl> | It does for recovery/rebuild times. A raidz over 90 disks, for example, is slower to rebuild than a raid0 over 6 raidz vdevs of 15 disks each |
| 23:05:23 | <@Fusl> | and generally, raidz and HDDs don't mix very well once the zpool becomes a little fragmented |
| 23:05:33 | <@Fusl> | (mostly due to the increased random i/o) |
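For context, the two layouts Fusl compares differ only in how the vdevs are declared at pool creation; ZFS automatically stripes (raid0-style) across every top-level vdev in a pool. A minimal sketch, with hypothetical device names:

```sh
# Layout 1: one wide raidz1 vdev - a rebuild has to read from all 90 disks.
zpool create bigpool raidz1 /dev/disk/by-id/disk{01..90}

# Layout 2: six 15-disk raidz1 vdevs - ZFS stripes across the six vdevs,
# so a rebuild is confined to the one 15-disk vdev that lost a disk.
zpool create bigpool \
    raidz1 /dev/disk/by-id/disk{01..15} \
    raidz1 /dev/disk/by-id/disk{16..30} \
    raidz1 /dev/disk/by-id/disk{31..45} \
    raidz1 /dev/disk/by-id/disk{46..60} \
    raidz1 /dev/disk/by-id/disk{61..75} \
    raidz1 /dev/disk/by-id/disk{76..90}
```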
| 23:05:43 | <andrew> | to provide context: I'm debating which (and how many) SSDs to buy to expand my SSD capacity |
| 23:06:11 | <andrew> | is it a waste of money/IOPS to have RAIDZ1 over four disks instead of three? |
| 23:06:23 | <@Fusl> | nope, that's perfectly fine |
| 23:07:26 | <andrew> | it's that age-old problem: buy more now for a lower cost per usable GB, or buy less now for a lower total cost, since I'm probably not going to fill up the storage for a while |
| 23:09:16 | <@Fusl> | IMHO I'd go with more when using ZFS, since expanding a raidz isn't easy (I think you'll have to recreate the entire zpool if you want to expand to more disks) |
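The workaround Fusl's parenthetical hints at: at the time of this log you can't add disks to an existing raidz vdev, but you can grow the pool by adding another whole raidz vdev, or recreate and restore. A sketch, with hypothetical pool and device names:

```sh
# Grow a pool by adding a second raidz1 vdev (the existing vdev is untouched);
# ZFS then stripes new writes across both vdevs.
zpool add tank raidz1 /dev/sde /dev/sdf /dev/sdg /dev/sdh

# Widening an existing raidz vdev, by contrast, means recreate-and-restore:
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F newtank   # newtank created at the new width
```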
| 23:09:19 | <imer> | been a while since I looked into this, so I'm hazy on the details, but I seem to remember 10 drives per raidzN being the sweet spot of performance vs. space "wasted" on parity (you can still add multiple 10-drive vdevs per pool if you want, so no issues there). Haven't personally tested that with SSDs, though; I remember the numbers checking out on spinning rust from "yeah, good enough" testing |
| 23:09:34 | <andrew> | imer: I'm not buying ten SSDs right now :P |
| 23:09:40 | <imer> | well, I don't know |
| 23:09:51 | <imer> | I wouldn't be surprised if people were! |
| 23:10:15 | <andrew> | anyways, I'm trying to decide whether to buy used eBay SAS SSDs (like the Samsung PM1643) or some brand new PCIe Gen4 SSDs (for about 50% more cost) |
| 23:10:45 | <andrew> | the brand new drives are consumer grade but their sustained performance still likely exceeds those old SAS drives |
| 23:11:28 | | AmAnd0A joins |
| 23:12:36 | <andrew> | I have concerns about my LSI 9300-16i being a bottleneck if I buy a bunch of SAS SSDs - there's a 7 GB/s limit due to the PCIe 3.0 x8 link, and the SAS controllers allegedly handle "over 1 million IOPS" each, which would easily be saturated by reading from two of the SSDs |
| 23:13:37 | <andrew> | that being said, I'm not sure whether it actually matters that much in the real world; chances are the CPU wouldn't be able to keep up anyway |
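andrew's 7 GB/s figure is easy to sanity-check with back-of-the-envelope arithmetic (the encoding and lane numbers below are standard PCIe 3.0 parameters, not from the log):

```sh
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding; the 9300-16i uses an x8 link.
echo '8 * (128/130) * 8 / 8' | bc -l   # GT/s * encoding * lanes / (bits/byte) ~= 7.88 GB/s raw
# Protocol/packet overhead shaves off a bit more, hence the ~7 GB/s usable figure.
```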
| 23:15:28 | <@JAA> | Isn't raidz expansion a thing now? I remember hearing about it like a year or two ago. |
| 23:15:40 | <andrew> | JAA: RAIDZ expansion is still a work in progress; I've been following that PR for a while |
| 23:15:47 | <@JAA> | Ah |
| 23:16:03 | <@JAA> | I guess it was a 'SOOON!!!!' thing then that I'm thinking of. |
| 23:17:25 | <andrew> | I know what I'm doing is a bit strange: I'm planning on running a database workload, which needs IOPS, but for cost savings I'm planning to use RAIDZ1 instead of mirrors, and to paper over the IOPS penalty of the RAIDZ I'm considering buying some hecking fast SSDs :P |
| 23:50:42 | | BlueMaxima joins |
| 23:58:52 | | Arcorann (Arcorann) joins |