00:04:58BlueMaxima joins
00:37:54Arcorann (Arcorann) joins
01:20:39Lord_Nightmare quits [Quit: ZNC - http://znc.in]
01:24:11Lord_Nightmare (Lord_Nightmare) joins
02:42:55nicolas17 joins
02:43:03<nicolas17>JAA: https://github.com/mxmlnkn/rapidgzip
03:27:42<fireonlive>https://old.reddit.com/r/comics/comments/14hop45/unsinkable_oc
03:27:57<fireonlive>https://teddit.net/r/comics/comments/14hop45/unsinkable_oc
03:56:21AmAnd0A quits [Read error: Connection reset by peer]
03:56:38AmAnd0A joins
04:54:51BlueMaxima quits [Ping timeout: 258 seconds]
05:12:21BlueMaxima joins
05:23:30nicolas17 quits [Ping timeout: 265 seconds]
06:09:21hitgrr8 joins
06:11:57<systwi_>Hehehe, okay, this got quite a chuckle out of me.
06:12:02<systwi_>https://github.com/slavfox/Cozette#windows
06:12:36<systwi_>"Grab `CozetteVector.ttf`. If you want to get the bitmap versions to work, [follow the instructions from here](https://wiki.archlinux.org/index.php/installation_guide)."
06:12:48<systwi_>:-P
06:42:57MactasticMendez (MactasticMendez) joins
06:42:58BlueMaxima quits [Read error: Connection reset by peer]
08:44:27AmAnd0A quits [Read error: Connection reset by peer]
08:44:38AmAnd0A joins
08:52:44MactasticMendez quits [Client Quit]
11:32:31razul quits [Quit: Bye -]
12:00:10<@JAA>nicolas17: I assumed something like that existed but never looked for it. Neat, thanks. I doubt it provides any speedup on a single decompression, but should be useful for repeated access.
12:03:32justmolamola joins
12:12:41qwertyasdfuiopghjkl quits [Remote host closed the connection]
12:13:59justmolamola quits [Remote host closed the connection]
12:40:34<imer>kiska: just gonna boop you here: on the tracker stats, both ETAs read "Out ETA"; looking at the query, the left one should be todo. I tried to quickly look at setting up some more stats, but I don't know what I'm doing in grafana lol, so gonna have to take a step back and do some learning first
12:43:21sec^nd quits [Ping timeout: 245 seconds]
12:43:40sec^nd (second) joins
13:03:32VickoSaviour joins
13:03:41<VickoSaviour>just some random question, is the kitty0706 YouTube channel archived?
13:04:00Iki joins
13:33:07balrog quits [Ping timeout: 265 seconds]
13:34:05Arcorann quits [Ping timeout: 252 seconds]
13:40:46Arcorann (Arcorann) joins
13:48:23Arcorann quits [Ping timeout: 252 seconds]
14:54:45VickoSaviour quits [Remote host closed the connection]
15:14:43<kiska>imer: Title error on my part :D
15:15:33<kiska>Well the db is influxdb 1.8 so it doesn't have flux enabled by default, so you're gonna have to learn influxql :D
15:25:39<imer>^ that and figure out what data is even available to query, I'm sure there's an obvious way to do that, didn't spot it in a few minutes of random poking about though
15:35:05AmAnd0A quits [Ping timeout: 252 seconds]
15:35:12AmAnd0A joins
15:47:34balrog (balrog) joins
15:58:09Dango360 quits [Read error: Connection reset by peer]
16:00:29Dango360 (Dango360) joins
16:02:12AmAnd0A quits [Read error: Connection reset by peer]
16:02:29AmAnd0A joins
16:10:19<kiska>Explore is your friend https://grafana3.kiska.pw/explore?orgId=1
16:10:54<kiska>I am only exporting from the websocket a limited set of stuff to influxdb
16:14:19lk quits [Quit: lk]
16:31:04<fireonlive>kiska: i have a friend? 🥹
16:45:40<kiska>fireonlive Yes you do, it's called influxdb-chan
16:47:07<fireonlive>(* ^ ω ^)
17:03:40lk (lk) joins
17:32:31sec^nd quits [Ping timeout: 245 seconds]
17:36:18pseudorizer (pseudorizer) joins
17:37:45sec^nd (second) joins
18:04:35<Doranwen>I've tried reading and searching a whole bunch to figure out how to create a separate file (on my /home partition, because that's what has space) for /tmp, and I don't feel like I understand anything I'm reading well enough to muck with it - but I'm having issues with *something* filling up /tmp too much and running me out of root space
18:05:00Doranwen really really wishes she'd separated /tmp out to a different partition - and given it a nice amount of space - when she set her system up
18:05:29<Doranwen>I can't tell if the 2nd option here is what I want or not: https://computingforgeeks.com/mount-tmp-on-a-separate-partition-in-linux/
18:05:52<Doranwen>I can copy commands easily enough, but I don't like to do it if I'm not fully understanding what's happening
18:12:14<BigBrain>Doranwen: second option looks good, no need to edit /etc/fstab though
18:13:54<BigBrain>i think you can just $ mount -o loop,nosuid,noexec,nodev,rw /home/tmp-file /tmp
18:15:25<Doranwen>Will that persist through a restart?
18:17:25<Doranwen>Also, apparently part of what was filling up my /tmp was a lot of stuff from LO, some of it from several months ago. /\o/\
18:17:48programmerq quits [Ping timeout: 265 seconds]
18:18:27<BigBrain>Doranwen: no, use /etc/fstab for that, thought you needed a temporary fix
18:19:22programmerq (programmerq) joins
18:21:06<Doranwen>BigBrain: Yes and no - I mean, right now temporary is needed. But I made a major goof in sizing the / partition when setting it up and also didn't separate out /tmp (this was years ago and I was much more n00bish than I am now), so I'm likely to run into this issue again - and any other fix is going to be WAY too complicated/hassle/etc.
18:21:47<Doranwen>I need a whole new system, is what I really need - given age of things and whatnot - but I don't have the $$ to purchase the parts so I'm trying to limp along with what I have while I finish up some massive projects.
18:22:24<Doranwen>Anyway, thank you for looking that over! I *thought* it looked like it, but I wanted someone else's opinion before I tried it.
18:28:37<Doranwen>Also, am I right in thinking I need to run the mount command to get it to mount *now*, and editing /etc/fstab is what will get it to mount when I restart?
18:28:49<Doranwen>(I'm not seeing how the latter mounts it now.)
18:37:47<Doranwen>OK, something did NOT work.
18:37:59<Doranwen>Now I try to open up LO and it just complains about write errors and won't open any document.
18:38:41<Doranwen>This is what I was afraid of.
18:41:31<Doranwen>Somehow something isn't getting access to what it needs and I am unable to use LO at all now.
18:46:24<Doranwen>BigBrain: I think I'm going to have to undo whatever it is I just did, because LO - even with --norestore - will *not* open up. It's unable to create a temporary file whatsoever.
18:50:57<BigBrain>change the options behind -o, maybe it needs some perms that are off
18:52:34<Doranwen>I unmounted /tmp temporarily so I could work with it, took a bit of work as I use vi like a few times per decade, lol.
18:53:34<Doranwen>I fear I don't know enough about the options to fix it, so I may have to just hope I can keep /tmp from filling up again.
18:54:18<Doranwen>I'll come back to this later and see if I can read some more on all of it.
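(Editor's note: the loop-file approach discussed above generally needs a filesystem created on the file before it can be mounted, and a freshly made ext4 filesystem's root directory is owned by root with mode 755 - which would produce exactly the "can't create a temporary file" write errors LO was showing. A minimal sketch, assuming the /home/tmp-file path from the conversation and an example 4G size; all commands need root:)

```shell
# sketch of the loop-file /tmp setup; path and size are examples
sudo fallocate -l 4G /home/tmp-file        # reserve space on /home
sudo mkfs.ext4 /home/tmp-file              # the file needs a filesystem before it can be mounted
sudo mount -o loop,nosuid,nodev,rw /home/tmp-file /tmp
sudo chmod 1777 /tmp                       # /tmp must be world-writable + sticky, or apps get write errors
# to make it persist across reboots, add a line like this to /etc/fstab:
# /home/tmp-file  /tmp  ext4  loop,nosuid,nodev,rw  0  0
```

(Dropping noexec from the suggested options may also matter, since some programs execute helpers out of /tmp.)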
18:56:12Naruyoko quits [Read error: Connection reset by peer]
19:02:14nicolas17 joins
19:05:39<nicolas17>JAA: that's the neat thing, it does provide speedup on single-file decompression without an index!
19:07:47<nicolas17>apparently it's a pretty novel approach, similar to pugz but not restricted by the file contents
19:09:01<nicolas17>pugz docs: "Only text files with ASCII characters in the range ['\t', '~'] are supported. For example, .tar.gz files are binary thus won't work. Why binary files are not supported: 1) we optimized the guessing of block positions for ASCII files (resulting in less false positives when scanning the bitstream for a deflate block), and 2) we optimized the code to encode unresolved back-references using 8 bits along with the decompressed text."
19:09:07<nicolas17>rapidgzip works on anything
19:11:55Naruyoko joins
19:16:26AmAnd0A quits [Read error: Connection reset by peer]
19:16:29AmAnd0A joins
19:55:21razul joins
19:59:38gfhh quits [Ping timeout: 252 seconds]
20:00:09<@JAA>nicolas17: Hmm, I guess my understanding of DEFLATE might be incomplete. I thought that a block could (indirectly) depend on any previous block as long as that reference is reused within the sliding window every time, which would mean that in the worst case, you couldn't beat sequential inflation.
20:01:30<nicolas17>hmm you could probably build a worst-case gzip that specifically defeats this tool
20:02:23<nicolas17>but in the average case it works, I think it processes as much as it can and leaves backreferences unresolved until the thread working on the previous block finishes?
20:03:04<@JAA>Right, makes sense.
20:09:47<nicolas17>also, a friend had a similar idea
20:10:34<nicolas17>yesterday I sent him the link to the rapidgzip paper
20:10:49<nicolas17>>ah fuck yeah they've done what I wanted to do
20:10:51<nicolas17>>FUK
20:10:52<nicolas17>>well at least I don't have to do it now
20:11:53<@JAA>:-)
20:29:50pseudorizer quits [Client Quit]
20:30:33pseudorizer (pseudorizer) joins
20:36:14<imer>kiska: https://transfer.archivete.am/14yghr/2023-06-25_22-34-22_zLaJhhYOMf.txt not quite sure how the size stat works, so the done bytes/s is still wrong, if you want to have a look
20:40:09<kiska>imer: What is that again?
20:40:16<kiska>The whole dash or a panel?
20:40:47<kiska>Looks like the dash?
20:42:17<kiska>What size stat? Oh you mean the field?
20:42:38<kiska>Here it is in my code
20:42:38<kiska>points.push(new Point(project).floatField('size', json.megabytes).tag('downloader', json.downloader));
20:46:11<kiska>Essentially what I am doing is: every event that comes in on the websocket gets pushed to influxdb
20:46:56<kiska>Here is the horrible js https://paste.kiska.pw/PlimBlankness
20:48:25<masterX244>ugly doesnt matter if it works, duct tape ftw
20:50:02<kiska>masterX244 IF you want to contribute to the duct tape known as the dashboard its here https://grafana3.kiska.pw/d/000000/archiveteam-tracker-stats?orgId=1&var-project=lineblog&var-downloaders=All
20:50:19<kiska>You can't save the dash, but you can copy its json and paste it somewhere where I can apply it
20:50:28<masterX244>got enough own ducttape from archival crap on my own infra
20:51:05<kiska>Oh yeah... btw do not open "Per user" unless you want it to chug your browser
20:51:21<kiska>Or do, I just don't take any responsibility for your browser
20:56:14fireonlive blames kiska
21:00:41TheTechRobo quits [Ping timeout: 252 seconds]
21:02:46AmAnd0A quits [Read error: Connection reset by peer]
21:03:25AmAnd0A joins
21:05:08TheTechRobo (TheTechRobo) joins
21:16:57AmAnd0A quits [Ping timeout: 258 seconds]
21:17:40AmAnd0A joins
21:20:52<imer>kiska: yep, dash, sorry stepped away for a bit
21:20:52<imer>ah, ok, it's in MB. can't seem to wrap my head around how to turn a sum into a value per second. You'd think you could just divide by $__interval, but that doesn't seem to work
21:21:28<nicolas17>derivative?
21:21:31Dango360 quits [Read error: Connection reset by peer]
21:22:19<imer>it's not a constantly increasing count, just the momentary byte values
21:22:49<kiska>I refer you to https://docs.influxdata.com/influxdb/v1.8/query_language/
21:23:20<imer>yeah, been looking through the docs to no avail
21:23:31<kiska>Here is how I am doing it for network stats https://grafana.kiska.pw/d/7NfIlyRWz/telegraf-host-metrics?orgId=1&refresh=30s&editPanel=11
21:23:48<nicolas17>ah
21:23:52<nicolas17>that's actually a grafana specific issue
21:24:09<kiska>Hrm?
21:24:17<kiska>Cont.
21:24:54<nicolas17>you're dividing by $__interval which is text like 5s
21:25:01<imer>yep.
21:25:06<nicolas17>use $__interval_ms which is something like 5000
21:25:10<imer>thank you
21:31:49<imer>`SELECT sum("size") / ($__interval_ms / 1000) * 1024 * 1024 FROM "autogen"./^$project$/ WHERE $timeFilter GROUP BY time($__interval) fill(0)` that's what I got now, but that's got the issue of not showing the right value at the end, since the windowing period is still incomplete
21:32:13<imer>cumulative sum seems to be super slow, so can't do that with derivative
21:35:53lukash quits [Ping timeout: 252 seconds]
21:39:53<imer>kiska: bytes/s done panel: https://transfer.archivete.am/pYvm2/2023-06-25_23-39-23_mKM0XbfU9u.txt with the caveat of above (but at least correct data)
22:12:44cdub quits [Ping timeout: 252 seconds]
22:15:45cdub joins
22:17:33hitgrr8 quits [Client Quit]
22:30:43<kiska>Its in now
22:33:48<kiska>Oh btw the value is MiB xD
22:33:55<kiska>Tracker idiosyncrasies
22:34:37cdub quits [Client Quit]
22:42:50cdub joins
22:45:07<fireonlive>kiska: oh what type does the tracker use?
22:45:48<nicolas17>I thought the tracker used bytes?
22:48:52<fireonlive>oh! yes it does.
22:49:17<fireonlive>(and displays in -iB too :)
22:50:23<@JAA>(It used 'MB' to mean MiB, which annoyed me so much that I did a PR for that.)
22:51:02<@JAA>Oh wow, it's been three years.
22:51:26<fireonlive>^_^
22:51:50<fireonlive>time is a bastard in both directions
22:52:06<flashfire42>Time is a bastard and so am I
22:52:30<fireonlive>;)
22:57:50Doranwen quits [Ping timeout: 252 seconds]
23:02:09Doranwen (Doranwen) joins
23:06:18BlueMaxima joins
23:07:33BlueMaxima quits [Read error: Connection reset by peer]
23:07:42BlueMaxima joins
23:51:32byteofwood (byteofwood) joins
23:55:02imer quits [Ping timeout: 252 seconds]
23:55:49imer (imer) joins
23:57:35wickedplayer494 quits [Ping timeout: 265 seconds]