00:30:43<mind_combatant>oh, i always just assumed YAML stood for Yet Another Markup Language, surprised to learn it doesn't as much as the other stuff.
00:32:13<nicolas17>it originally did stand for that
00:35:26lunik1 quits [Quit: :x]
00:36:02lunik1 joins
00:45:24ericgallager joins
00:46:25<@imer>favorite yaml factoid is it being a json superset, tend to surprise people with that "just json serialize, plop it in and it'll just work"
00:46:49<@imer>superset might not be the right word. dunno
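[editor's note: imer's factoid can be shown concretely. YAML's flow style is essentially JSON, so a JSON document pasted into a YAML file parses as-is — with caveats (hence the "might not be the right word": YAML 1.1 predates the superset claim, and hard tabs or duplicate keys can still trip parsers). A minimal sketch with made-up keys and values:]

```yaml
# Hand-written YAML block style...
servers:
  - name: web-1
    port: 8080
# ...and the same kind of structure as pasted JSON — valid YAML flow style:
databases: [{"name": "pg-1", "port": 5432}]
```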
00:51:11nine quits [Ping timeout: 260 seconds]
00:51:27nine joins
00:51:27nine quits [Changing host]
00:51:27nine (nine) joins
00:54:48nine quits [Client Quit]
00:55:01nine joins
00:55:01nine quits [Changing host]
00:55:01nine (nine) joins
00:55:35ericgallager quits [Client Quit]
00:57:20<pabs>Debian debates AI models and the DFSG https://lwn.net/SubscriberLink/1018497/52dcfe7f7e9502be/
00:59:56nine quits [Ping timeout: 260 seconds]
01:03:01nine joins
01:03:01nine quits [Changing host]
01:03:01nine (nine) joins
01:04:16ericgallager joins
01:05:23nine quits [Client Quit]
01:05:36nine joins
01:05:36nine quits [Changing host]
01:05:36nine (nine) joins
01:10:17nine quits [Ping timeout: 258 seconds]
01:18:28Webuser317825 joins
01:45:05sec^nd quits [Remote host closed the connection]
01:45:28sec^nd (second) joins
01:56:59ericgallager quits [Client Quit]
02:11:49<nicolas17>pabs: Glaser be like "I disagree with your anti-AI proposal because it's not anti-AI enough"
02:12:50<pabs>I haven't followed the threads, but I'm with Glaser :)
02:13:07<nicolas17>me neither, I only read your lwn link :P
02:13:27<nicolas17>pabs: today a friend tweeted
02:13:29<nicolas17>"I'm doing interviews, and I'm going to give you a tip that no one asked me for: don't use ChatGPT in "listening" mode: 1st, because it shows when you don't know and answer things that aren't true, and 2nd, if you wear glasses, IT SHOWS UP IN THE REFLECTION WHEN YOU ALT TAB AND THE LLM WRITES THE ANSWER AND YOU READ IT."
02:13:33<pabs>it's not a free model unless everything is free, including the training data/code, model weights etc
02:13:43<pabs>see also https://salsa.debian.org/deeplearning-team/ml-policy
02:13:59<pabs>lol
02:16:03<nicolas17>pabs: shipping huge training data as sources is highly impractical, and getting enough DFSG-free data to make the model work is highly impractical, so I think that position would basically ban LLMs from Debian... but *I'm okay with that*
02:17:10<pabs>both true, but possible to create workarounds for
02:17:44<pabs>the compute for actually doing retraining is a big problem too
02:17:51<nicolas17>retraining the model as part of the build is bonkers
02:17:55<nicolas17>nobody does that
02:18:07<nicolas17>and it's definitely not reproducible
02:18:11<pabs>although Debian doesn't require packages actually build from source, so we can skip retraining
02:18:51<pabs>there are some models that would meet the strictest definition; rnnoise for example does now I think
02:19:06<pabs>(but the copies of rnnoise already in Debian don't meet it)
02:19:31<pabs>maybe Bergamot (Firefox machine translation) might too
02:19:41<nicolas17>there's plenty of source packages that contain icons and other images as both bitmaps and the original SVG, and I don't think it's a DFSG requirement that the SVG is rendered from source into PNG at package build time
02:20:43<pabs>right, it isn't. it is fucking stupid that it isn't a requirement though, with reasonable exceptions. more on that at https://wiki.debian.org/AutoGeneratedFiles
02:21:44<nicolas17>then I'd make *that* a requirement before even remotely thinking about AI training at package build time
02:25:05<pabs>don't think you would do it at normal package build time. I'd make a separate AI archive, have the training done there, and other packages Recommend the AI model packages
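[editor's note: the SVG→PNG rendering nicolas17 mentions is cheap to do at package build time. One common choice (not the only one) is librsvg's rsvg-convert; a sketch with placeholder file names:]

```shell
# Render a 64x64 PNG from the SVG source at build time
# (icon.svg / icon.png are hypothetical names):
rsvg-convert --width=64 --height=64 --output=icon.png icon.svg
```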
02:32:19ericgallager joins
02:33:22DigitalDragons (DigitalDragons) joins
03:28:25sec^nd quits [Remote host closed the connection]
03:28:55sec^nd (second) joins
03:44:03<mind_combatant>does anyone here have enough experience with ZFS to know, if i were to get another M.2 SSD in my computer, which would benefit more from having the SSD all to itself? L2ARC or ZIL SLOG? my current setup has a boot ext4 partition and both of those all living on the same M.2 SSD, and this is obviously not ideal. i intend to get 2 more M.2 SSDs now that i upgraded to a motherboard with the slots for a total of 3, but i can only afford one at a time, so i was wondering which type of cache should get the first one? figure here's as good a place as any to ask for help.
03:46:28<nukke>depends on your workload
03:46:49<nukke>if possible, get more RAM instead of adding an L2ARC drive
03:47:42<mind_combatant>absolutely doing that too
03:51:13<nicolas17>doesn't L2ARC only make sense if the cache is on something an order of magnitude faster than the storage?
03:51:23<nicolas17>so L2ARC on SSD and storage on SSD is pointless
03:54:55<mind_combatant>it is, the main bulk of the storage is in a RAIDZ1 array on 3 14TB spinning HDDs. only the root filesystem is that ext4 partition on SSD, /home and a few other parts are all mounted from the ZFS pool.
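[editor's note: moving either role onto a dedicated device is a one-line zpool operation. A sketch with hypothetical pool and device names (tank, nvme1n1, nvme2n1); as noted above, a SLOG only accelerates synchronous writes, and L2ARC only helps once the in-RAM ARC routinely overflows — which is why "buy more RAM first" is the standard advice:]

```shell
# Dedicated SLOG (separate ZFS intent log) — helps sync-write-heavy
# workloads such as NFS exports or databases:
zpool add tank log /dev/nvme1n1
# Dedicated L2ARC read cache — only pays off once the RAM ARC is full:
zpool add tank cache /dev/nvme2n1
# Both devices then appear under the pool:
zpool status tank
```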
03:56:12DogsRNice quits [Read error: Connection reset by peer]
04:16:39<pabs>what M.2 SSD brands do folks recommend btw?
04:17:30Webuser317825 quits [Client Quit]
04:18:10nine joins
04:18:10nine quits [Changing host]
04:18:10nine (nine) joins
04:43:26<steering>I still just stick with Samsungs, they're expensive but I know they're not gonna be trash :P
04:43:38<steering>lot of customers at work seem to like Kingston
04:44:13Webuser839979 joins
05:02:24pabs still stuck on SATA, but thinking about getting a PCI card and drives
05:03:16<nicolas17>PCIe cards with M.2 slots are very cheap, I think they only have passive components
05:05:15<pabs>would they beat SATA by much? do they have multiple slots?
05:07:49<steering>mmh, you can usually get a couple of slots, overall bandwidth all depends on a lot
05:08:05<steering>I mean, if you don't have native M.2, you probably have PCIe 3 or so
05:09:06<steering>(it's still plenty of speed tbh but might be relevant to the latency)
05:10:22<steering>in newer systems you usually have an M.2 slot attached directly to the CPU, for ultra blazing fast solid state goodness
05:14:02<steering>69197576704 bytes (69 GB, 64 GiB) copied, 85 s, 814 MB/s
05:14:29<steering>definitely beats SATA, is it noticeable compared to a SATA SSD for normal tasks? probably not :P
05:16:00<steering>my other disk - which isn't used yet - actually is sustaining 1.5GB/s read
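[editor's note: figures like the ones steering pasted typically come from a sequential dd run. A rough sketch of measuring sequential throughput with a temporary file rather than a raw device (not a rigorous benchmark — page cache and filesystem effects skew it; reading the actual block device needs root):]

```shell
# Write a 256 MiB test file, forcing data to disk before dd reports a rate:
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
# Sequential read of the same file (cache-warm, so optimistic):
dd if=/tmp/ddtest of=/dev/null bs=1M
rm /tmp/ddtest
```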
05:41:03<mind_combatant>oh yeah, speaking of fast storage, https://hackaday.com/2025/04/21/pox-super-fast-graphene-based-flash-memory/ maybe the spiritual successor to optane is finally gonna happen?
05:44:31<pabs>"Demonstrated was a write speed of 400 picoseconds, non-volatile storage and a 5.5 × 10^6 cycle endurance with a programming voltage of 5 V"
05:44:41pabs wonders how that endurance compares with optane
06:02:35pabs quits [Read error: Connection reset by peer]
06:03:18pabs (pabs) joins
06:19:23ericgallager quits [Quit: This computer has gone to sleep]
06:28:55ericgallager joins
06:40:04ericgallager quits [Client Quit]
07:10:19Meli (Meli) joins
07:14:55APOLLO_03 joins
07:16:22APOLLO03 quits [Ping timeout: 258 seconds]
07:16:39APOLLO_03 quits [Client Quit]
07:29:38BornOn420 quits [Remote host closed the connection]
07:30:16BornOn420 (BornOn420) joins
08:23:37<pabs>https://www.washingtonpost.com/technology/2025/04/25/wikipedia-nonprofit-ed-martin-letter/