00:08:50<fireonlive>so many!
01:09:48lukash98 quits [Client Quit]
01:27:33lukash98 joins
01:37:20lukash98 quits [Client Quit]
01:44:20etnguyen03 (etnguyen03) joins
01:46:39lukash98 joins
01:51:43<nicolas17>another scary download from samsung
01:51:44<nicolas17>3.31G/3.36G [3:58:12<03:44, 200kB/s]
01:52:12<kiska>Samsung: Enable network interruption! :D
01:55:14<nicolas17>finished, 4:01:20
02:00:57<nicolas17>verifying file with "gzip -t": 0:01:42
02:04:20<nicolas17>update: turns out backblaze documentation is now using a third party "document360" platform
02:05:01<nicolas17>and the tree of documents can't be expanded (doesn't even show icons suggesting there's anything to expand) if JS is off
02:08:35<@JAA>Thanks, I hate it.
02:09:07<nicolas17>letting me expand the tree with JS off seems Hard
02:09:20<nicolas17>but without the icon I didn't even realize I was supposed to be seeing more
02:12:35<imer>what if instead of not working, it was just expanded to begin with? nah, seems too hard to implement
02:14:02<@JAA>It's not hard to do expansions without JS. One typical method is to have a hidden checkbox, wrap the clicky thingy in a <label>, and then toggle `display` via `:checked`.
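The checkbox trick JAA describes can be sketched roughly as below; the element ids and class names are made up for illustration, not taken from any real site:

```html
<style>
  .toggle { display: none; }                       /* hide the real checkbox */
  .children { display: none; }                     /* collapsed by default */
  .toggle:checked ~ .children { display: block; }  /* expand when checked */
</style>

<input type="checkbox" id="node1" class="toggle">
<label for="node1">S3-Compatible API ▸</label>     <!-- the clicky thingy -->
<div class="children">
  <a href="#">child document</a>
</div>
```

Clicking the label toggles the hidden checkbox, and the `:checked` sibling selector shows or hides the children with zero JavaScript.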
02:14:53<nicolas17>expanding the item I'm looking at would certainly be a good first step
02:15:11icedice (icedice) joins
02:15:54<nicolas17>wait it seems more broken than before?
02:17:29<nicolas17>oh god they try to remember in a cookie if JS is enabled or not
02:17:41<nicolas17>so if you load the page with JS off, it's somewhat broken
02:17:44<nicolas17>if you enable JS, it works
02:18:17<nicolas17>if you disable JS, it's *more broken than before*, because now JS is off but jsEnabled=true in cookies so it relies on JS in places it previously didn't
02:18:50<nicolas17>JAA: https://www.backblaze.com/docs/cloud-storage-s3-compatible-api
02:19:09<@JAA>lol
02:19:28<DigitalDragons>document360 is all around not great lol
02:29:12<nicolas17>finished! 4:01:20 download from samsung, 26:02 upload to IA
02:32:17<nicolas17>I need to do distributed downloading...
02:32:36<fireonlive>document360…
02:32:53<fireonlive>because hosting some html that describes how shit works is so goddamn complicated
02:33:00<fireonlive>we need some saas bullshit
02:34:35<nicolas17>wonder if I can hack something up with seesaw
02:37:34<nicolas17>JAA: seesaw/tracker.py references https://github.com/ArchiveTeam/universal-tracker, that's super dead, right?
02:41:10<@OrIdow6>fireonlive: I too am naive to the process that produces so many documentation SaaSes - I assume that it's just a publicly visible part of bigger things
02:41:33<@OrIdow6>Also from document360.com, "AI-powered Suggestions/ChatGPT & OpenAI powered content generators help to find the best title to SEO meta description to increase site traffic."
02:42:17<nicolas17>X_X
02:42:38<kiska>OOFT
02:43:43<nicolas17>I'm starting to get some "There aren't any items available for this project at the moment."
02:44:15<nicolas17>wrong channel
02:44:38<@JAA>nicolas17: It's not what runs on the actual tracker, if that's what you're asking.
02:47:26<nicolas17>it seems somewhat weird that "no items" and "project code out of date" are represented via HTTP status codes :D
02:47:49<@OrIdow6>nicolas17: It's dead but AFAIK it's still API-compatible, just doesn't have backfeed and some other newer stuff
02:47:57<@OrIdow6>Also this is rare for me but this is ot not ot
02:48:47<nicolas17>oops
02:56:25<nicolas17>OrIdow6: sometimes I ramble here about my progress downloading from opensource.samsung, and arguably that's fine in -ot but it can quickly drift to on topic so... :P
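The status-code convention nicolas17 found odd could be handled client-side with a tiny helper like this sketch; the specific code-to-condition mapping below is an illustrative assumption, not taken from the seesaw or universal-tracker API:

```shell
# Map a tracker HTTP status code to a human-readable condition, the way a
# seesaw-style client might. The code meanings here are assumptions for
# illustration, not the documented universal-tracker protocol.
classify_tracker_status() {
  case "$1" in
    200) echo "item assigned" ;;
    404) echo "no items available" ;;
    *)   echo "unexpected status $1 (outdated project code?)" ;;
  esac
}

classify_tracker_status 404   # prints "no items available"
```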
03:03:11<fireonlive>ai suggestions and seo haha
03:08:20Lord_Nightmare quits [Quit: ZNC - http://znc.in]
03:12:18Lord_Nightmare (Lord_Nightmare) joins
03:19:27<@JAA>I'm playing with Podman Compose, and it's beautiful. When you get the compose.yml syntax wrong, it doesn't return a parse error, it yeets a straight AttributeError exception up the stack.
03:20:10<@JAA>(For the mistake I made just now, anyway.)
03:20:50<nicolas17>https://apnews.com/article/amazon-nlrb-unconstitutional-union-labor-459331e9b77f5be0e5202c147654993e
03:38:27<@JAA>Skimming that GitLab article...
03:38:34<@JAA>> GitLab uses both timestamp with timezone and timestamp without timezone. My understanding is that the data type timestamp without timezone is used when the system performs an action and data type timestamp with time zone is used for user actions.
03:38:38<@JAA>wat
03:39:04<nicolas17>storing the user's timezone?
03:39:30<@JAA>Sure, but why would you omit the TZ entirely for system actions, rather than use the system's TZ?
03:41:11<nicolas17>from blog post comments, I think it's more about which fields were created by Ruby on Rails before this was moved to a manual .sql schema :P
03:42:10<kiska>nicolas17: The headline is beautiful :D
03:44:29<nicolas17>we got another chonker: 210M/3.12G [16:43<4:33:23, 177kB/s]
03:51:50etnguyen03 quits [Ping timeout: 240 seconds]
04:06:12<fireonlive>UTC fur alles
04:06:16<fireonlive>für
04:06:19<fireonlive>:3
04:19:27etnguyen03 (etnguyen03) joins
04:33:34avatar joins
04:37:02<avatar>Any recommendations for a distributed filesystem which might let me spin down drives on nodes? Pretty sure Ceph and Gluster don't support that.
04:37:13<avatar>Use-case is I've migrated my primary zpool to bigger disks and now have plenty of spare drives, considering optimizing TB/watt of near-cold storage with rPi, existing proxmox nodes etc. Could be a useful once-per-month backup target or general archive storage tier that really doesn't actually need to be spinning 99% of the time. Bonus points if it only needs to spin up the one drive with the data for read operations. Suggestions welcome!
04:39:40<nicolas17>you should ask on the ceph IRC channel or something, maybe there is some configuration that lets you do it
04:41:11<nicolas17>or they'll say "if you need that use X instead"
04:41:38<avatar>Yeah, unfortunately Ceph doesn't do it. They've done some brainstorming to work out what might be required, but its still just an idea: https://tracker.ceph.com/projects/ceph/wiki/Towards_Ceph_Cold_Storage
04:42:38<@JAA>I think that might be about actually cold storage, i.e. powered-down nodes.
04:43:44<avatar>This mail seems to indicate that it was discussed at a Ceph summit in 2014/5: https://ceph-users.ceph.narkive.com/GywlkXxC/cold-storage-tuning-ceph
04:44:10<avatar>But at this stage it seems the OSDs are constantly in use, and will force drives to wake up.
04:45:45<avatar>SeaweedFS seems to have the same general issue, volumes are kept open and a user reports the disks spin back up immediately: https://github.com/seaweedfs/seaweedfs/discussions/2671
04:50:32<anarcat>i think in minio, you could do this with a cluster
04:51:13<anarcat>actually, i'm not even sure you need a cluster
04:51:16<anarcat>https://github.com/minio/minio/discussions/16352
04:51:19<anarcat>https://min.io/product/automated-data-tiering-lifecycle-management
04:51:21<anarcat>https://min.io/docs/minio/linux/administration/object-management/object-lifecycle-management.html
04:52:00<@JAA>https://github.com/minio/minio/discussions/17928#discussioncomment-6839709
04:53:00<anarcat>oh
04:53:03<anarcat>so nope?
04:53:09<@JAA>¯\_(ツ)_/¯
04:53:54<@JAA>I have no experience with this, just reading stuff.
04:54:09<@JAA>Definitely curious if you find something though.
04:54:42<avatar>Hmm yeah, looks like MinIO will also keep the disks spinning. Good thought though, I didn't think of that at first.
05:10:11<steering>im told i should post this https://opensource.google/documentation/reference/using/agpl-policy
05:12:35<nicolas17>if you make a license that says "everyone can use this except Google" that's not classified as open source, but AGPL does basically the same (?)
05:13:22<nicolas17>GPLv3 is also a good way to keep Apple away from your code
05:15:51<fireonlive>> it still presents a huge risk to Google because of how integrated much of our code is. The risks heavily outweigh the benefits
05:15:53<fireonlive>lols
05:16:28<nicolas17>yeah similar risk to signing an employment contract with Google with a non-compete clause (:
05:16:29<fireonlive>someone should integrate something with it on their last day on something super public
05:16:37<fireonlive>>:3
05:17:16<steering>I'm sure people have "accidentally" imported AGPL libs before.
05:17:38<steering>(and given the size of their repo(s) there's not a snowball's chance of getting it out of history surely)
05:17:56<steering>I found it more interesting that you can't even use AGPL software *on your PC* unless it's approved.
05:19:38<avatar>I found GarageFS (https://garagehq.deuxfleurs.fr/documentation/reference-manual/features/) which I've not heard of before, seems fun for homelab. Don't think it'll meet my needs, but thought I'd share.
05:20:29<fireonlive>ah yes the monorepo
05:20:50<fireonlive>oh on your pc too, hmm
05:21:18<nicolas17>how do they define "use" or "install"
05:21:37<nicolas17>will googlers get in trouble for running a web app with AGPL JS code? >:3
05:23:06<fireonlive>:3
05:23:24<@JAA>I'm sure no Googler has ever used Mastodon.
05:24:34<fireonlive>not even once! 🚬
05:30:03benjins2 quits [Read error: Connection reset by peer]
05:43:41<nicolas17>https://archive.fart.website/archivebot/viewer/costs ...lol
05:44:04<nicolas17>harsh
05:44:32<DigitalDragons>"The Million Dollar Archiver"
05:49:35<@OrIdow6>Didn't know that thing was still being updated
05:53:28<@OrIdow6>In my favor I have used much more of IA's storage on projects, I should be much higher up the list
05:54:17<nicolas17>I think my samsung bullshit will add up to <1TB :|
05:55:13<fireonlive>wall of honour*
05:55:17<fireonlive>:3
05:55:22<DigitalDragons>Someone needs to do a cost leaderboard for the tracker stats
05:55:47<fireonlive>i haven’t done too bad so far it seems
05:56:23<fireonlive>fire 3313687413661 6027.56
05:56:24<steering>petabytes per person when
05:56:32<fireonlive>i’m trying! :3
05:56:59<fireonlive>Ryz has us all beat tho
05:57:08<nicolas17>ryz has 590931713999941 bytes which is 537TiB
05:57:18<DigitalDragons>halfway!
05:58:42<@OrIdow6>Ryz's dedication to this project is pretty amazing
06:00:25<@OrIdow6>Also play guess the name
06:01:26<Ryz>Yes, there's a lot of archiving involved from me finding all sorts of stuff, a mix of that, ArchiveBot projects, and taking up requests from countless people
06:02:15<Ryz>Been a bit starving for running more of my own jobs since there is a period of time recently that I've been running other peoples' jobs or doing ArchiveBot projects like the Blogspot stuff... :c
06:12:28<fireonlive>Ryz++
06:12:28<eggdrop>[karma] 'Ryz' now has 5 karma!
06:14:16<Ryz>So...many...Blogspot stuff...I had to deal with x_x;
06:14:27<Ryz>The custom stuff to watch for, for a month or two S:
06:21:44etnguyen03 quits [Client Quit]
06:23:01nic0 (nic) joins
06:24:03nic quits [Ping timeout: 272 seconds]
06:24:03nic0 is now known as nic
06:43:50nulldata quits [Ping timeout: 240 seconds]
07:41:38<fireonlive>D:
08:25:37magmaus3 (magmaus3) joins
08:43:23systwi quits [Ping timeout: 272 seconds]
09:01:08systwi (systwi) joins
10:00:01Bleo18260 quits [Client Quit]
10:01:24Bleo18260 joins
10:43:20decky quits [Ping timeout: 240 seconds]
11:37:11decky_e joins
12:18:40IRC2DC joins
12:44:41Arcorann quits [Ping timeout: 272 seconds]
14:07:15jacksonchen666 is now known as RJHacker45462
14:07:19jacksonchen666 (jacksonchen666) joins
14:07:55RJHacker45462 quits [Remote host closed the connection]
14:35:46etnguyen03 (etnguyen03) joins
14:39:04jacksonchen666 quits [Remote host closed the connection]
14:39:30jacksonchen666 (jacksonchen666) joins
14:51:37avatar quits [Remote host closed the connection]
15:04:27jacksonchen666 quits [Remote host closed the connection]
15:04:56jacksonchen666 (jacksonchen666) joins
15:18:33Dango360 quits [Read error: Connection reset by peer]
15:38:46vukky quits [Quit: Ping timeout (120 seconds)]
15:39:09vukky (vukky) joins
15:43:25nulldata (nulldata) joins
16:52:47jacksonchen666 quits [Remote host closed the connection]
16:53:15jacksonchen666 (jacksonchen666) joins
16:58:36jacksonchen666 quits [Remote host closed the connection]
16:59:06jacksonchen666 (jacksonchen666) joins
17:38:58aaaaaaz joins
17:39:24aaaaaaz quits [Remote host closed the connection]
18:15:20etnguyen03 quits [Ping timeout: 240 seconds]
18:26:50midou quits [Ping timeout: 240 seconds]
18:34:17DLoader quits [Ping timeout: 272 seconds]
18:36:18magmaus3 quits [Client Quit]
18:38:06magmaus3 (magmaus3) joins
18:39:27<magmaus3>my server is back up :3
18:42:57midou joins
18:46:14DLoader (DLoader) joins
18:51:40<fireonlive>:3
18:51:51<fireonlive>https://x.com/jimbrowning11/status/1759289486054170886?s=12 < this guy seems to get a lot of reports lol
18:51:52<eggdrop>nitter: https://farside.link/nitter/jimbrowning11/status/1759289486054170886
19:08:20nic quits [Ping timeout: 240 seconds]
19:11:20nic (nic) joins
19:39:31wickedplayer494 quits [Ping timeout: 272 seconds]
19:40:10Chris5010 quits [Remote host closed the connection]
19:40:11wickedplayer494 joins
19:50:17DLoader quits [Ping timeout: 272 seconds]
20:03:53Aoede quits [Client Quit]
20:08:03Aoede (Aoede) joins
20:10:04ave quits [Quit: Ping timeout (120 seconds)]
20:10:04igloo22225 quits [Quit: Ping timeout (120 seconds)]
20:12:01DLoader (DLoader) joins
20:13:50Aoede quits [Ping timeout: 240 seconds]
20:16:48<fireonlive>i bought league of legends flavoured coke
20:16:52<fireonlive>the economy is healing
20:16:55<fireonlive>❤️‍🩹
20:17:11<fireonlive>“+XF flavoured” whatever the fuck that means
20:19:28<ymgve>you leveled up your taste
20:20:08<nukke>oh god you're gonna start randomly spitting out racial slurs and griefing everyone around you
20:24:56<ymgve>gamer word fuel
20:47:32Aoede (Aoede) joins
20:49:15<fireonlive>oh no
20:49:50<flashfire42>I mean we had coke 3000 frozen cokes at maccas for a little while. They tasted like some berry thing
20:50:51<fireonlive>it does kinda have a fruity aftertaste after the initial vanilla
21:23:50<nukke>sunday humor: https://www.reddit.com/r/github/comments/1at9br4/i_am_new_to_github_and_i_have_lots_to_say/
21:49:43<myself>I mean, they're not wrong. The experience of trying to find something you can just run, figuring out it's called "releases", but the releases tab doesn't appear on some projects because apparently some projects don't ... release? what even is that? Having people tell you "oh just go here and run this" and when you get there apparently you have to draw the rest of the owl.
21:51:53<Barto>agreed on the fact that when you look for releases, it's not the first thing you can find, and more often than not, you end up with the source code and not the binary (if there is one).
21:58:59<fireonlive>'no sherlock for you!'
21:58:59<fireonlive>*soup bag is grabbed out of hand*
22:26:42driib quits [Quit: The Lounge - https://thelounge.chat]
22:32:25wickedplayer494 quits [Ping timeout: 272 seconds]
22:32:37wickedplayer494 joins
22:33:49<fireonlive>"archive" "team" not "archive team" https://github.com/microsoftgraph/microsoft-graph-docs-contrib/blob/main/api-reference/v1.0/includes/snippets/csharp/archive-team-csharp-snippets.md
22:33:50<fireonlive>:p
22:34:13<Barto>nukke | is gogs/gitea/forgejo still a single go binary? > answering for forgejo, yes, too
22:34:27<fireonlive>ah nice
22:34:33<fireonlive>Barto: you trying it out?
22:34:43<Barto>i just nuked gitea
22:34:53<fireonlive>:o
22:34:55<Barto>running forgejo at home now
22:34:58<fireonlive>nice
22:35:19<Barto>unfortunately there's no migration guide, but i'll need to rename the db name in mariadb
22:36:03<Barto>and there's some slight chance i broke the gitea actions, who cares
22:36:05<fireonlive>oh odd, you'd think someone would have come up with one by now
22:36:14<fireonlive>i think they have their own runner now?
22:36:36<Barto>i just didn't try that part, and i will not try it this evening
22:37:10<@JAA>There's https://forgejo.org/docs/latest/admin/installation/#migration-from-gitea but not too much there.
22:37:40<Barto>yeah, kind of what i figured :D
22:37:41<@JAA>And only for Arch.
22:37:47<Barto>i went smarter with the config since i diffed them
22:38:11<Barto>also be careful, if you edited the systemd service file, dont forget to redo it :-)
22:38:29<Barto>damn you ReadWritePaths
22:38:42<@JAA>An alternative might be to do an export from Gitea and then import that into Forgejo.
22:39:14<@JAA>`gitea dump`
22:42:06<Barto>also, looks like i'll need to double check some stuffs in my nginx config for forgejo
22:43:40<Barto>but the thing in their guide is that since it changes the user running the daemon, i'll need to update all my remote ref to specify the right ssh user
22:45:19<@JAA>Right
22:47:58Barto commits his config change, proceeds to not run git remote set-url before git push
22:49:26fireonlive makes a note to run it as `git@`
22:50:06Barto definitely should update his config for this
22:52:00<Barto>also if you have a whitelist of ssh users, dont forget to update this too ;-)
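Collected in one place, the steps from this exchange might look like the sketch below; every path, unit name, and ssh user is a placeholder, and this is distilled from the chat, not an official Gitea-to-Forgejo migration procedure:

```shell
# Rough Gitea -> Forgejo switchover checklist, distilled from the chat above.
# All paths, service names, and usernames are placeholder assumptions.
migration_steps="
1. Export the old instance first:   gitea dump -c /etc/gitea/app.ini
2. Re-check edited unit files:      ReadWritePaths= in forgejo.service
3. Fix clone URLs if the daemon's ssh user changed:
   git remote set-url origin git@git.example.org:you/repo.git
4. Update any sshd AllowUsers whitelist for the new daemon user
5. Diff the old and new app.ini before starting the service
"
printf '%s\n' "$migration_steps"
```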
22:56:24<fireonlive>:3
22:56:33<fireonlive>many little things
22:59:07<Barto>that's freaking why i have a git with all this, a small grep gitea and i know what i changed :-)
22:59:44<Barto>i could use some more complex tools (ansible, dotdrop - well that for dotfiles), but heck scp is fine :-)
23:00:09fireonlive terraforms Barto
23:00:16fireonlive kubernetes Barto
23:00:18<fireonlive>:p
23:00:50<Barto>everything is bare metal on my nas, i do enough kubernetes/openshift at work, i know some terraform will come *soon*
23:03:07<fireonlive>:)
23:03:28<fireonlive>the 'zen garden' of servers :p
23:04:45<Barto>i would definitely not install the way i do on my nas, but heck i do whatever i want
23:14:17<fireonlive>https://jvns.ca/blog/2024/02/16/popular-git-config-options/ https://news.ycombinator.com/item?id=39400352
23:17:22<fireonlive>spice up ur git
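A few of the popular options that article covers, as a hedged sampler; check `git help config` before adopting any, since some need a recent git version:

```shell
# A handful of commonly recommended git config options (verify each against
# `git help config` for your version before relying on it):
git config --global pull.rebase true            # rebase rather than merge on pull
git config --global rerere.enabled true         # remember and reuse conflict resolutions
git config --global push.autoSetupRemote true   # skip --set-upstream on first push (git >= 2.37)
git config --global diff.colorMoved default     # highlight moved lines in diffs
git config --global init.defaultBranch main     # default branch name for new repos
```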
23:26:54<fireonlive>https://x.com/frantzfries/status/1758967011114148155?s=12
23:26:55<eggdrop>nitter: https://farside.link/nitter/frantzfries/status/1758967011114148155
23:26:59<fireonlive>so how long will this last
23:35:12jacksonchen666 quits [Ping timeout: 255 seconds]
23:59:56<fireonlive>ᓚᘏᗢ