00:16:47Wohlstand quits [Client Quit]
00:38:05etnguyen03 (etnguyen03) joins
00:40:53Exorcism0666 quits [Quit: Ping timeout (120 seconds)]
00:41:05Exorcism0666 (exorcism) joins
00:41:11DigitalDragons quits [Client Quit]
00:41:28DigitalDragons (DigitalDragons) joins
00:53:11etnguyen03 quits [Client Quit]
01:03:36yasomi is now known as Xe
01:04:12<Xe>I had a thought regarding archiveteam and anubis: web bot auth: https://developers.cloudflare.com/bots/concepts/bot/verified-bots/web-bot-auth/
01:05:06etnguyen03 (etnguyen03) joins
01:14:38Juest quits [Ping timeout: 276 seconds]
01:19:40nexusxe2 joins
01:23:06Juest (Juest) joins
01:23:23<nexusxe2>what stage of the process requires .warc files to be gzipped? a .warc i was moving from a remote server to my local machine went from a 975MiB .warc.gz to an 11.2MiB .warc.zst ... I know that zstandard is slower but for that kind of file size difference I would imagine it would be worth it?
01:29:36<pokechu22>I don't think anything specific requires .gz, but note that each record in the .warc.gz is compressed separately, rather than the whole warc being compressed on its own, so that it can seek to individual records
01:29:51<pokechu22>for archivebot, it's mostly legacy reasons with no real updates having been made to wpull in a while
01:37:29<nexusxe2>i assume the per-record compression is done for a reason, then?
01:43:37<pokechu22>Yeah, since web.archive.org needs to grab specific records from it and if it had to decompress the whole WARC every time that'd be a problem
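To make pokechu22's point concrete: a .warc.gz is a concatenation of independent gzip members, one per record, so a reader that knows a record's byte offset can decompress just that one record without touching the rest of the file. A toy illustration using only the standard library (not the actual wpull/warcio code, just the shape of the idea):

```python
# Minimal sketch of per-record gzip compression: each record is its own gzip
# member, the concatenation is still a valid .gz stream, and random access
# only needs the record's byte offset.
import gzip
import io

records = [
    b"WARC/1.1\r\nWARC-Type: response\r\n\r\n<html>first record</html>\r\n\r\n",
    b"WARC/1.1\r\nWARC-Type: response\r\n\r\n<html>second record</html>\r\n\r\n",
]

offsets = []
warc_gz = io.BytesIO()
for rec in records:
    offsets.append(warc_gz.tell())      # remember where this member starts
    warc_gz.write(gzip.compress(rec))   # one gzip member per record

# Seek straight to the second record and decompress only it.
warc_gz.seek(offsets[1])
with gzip.GzipFile(fileobj=warc_gz) as gz:
    print(gz.read(len(records[1])).decode())
```

A whole-file .zst (or .gz) stream compresses better because it can share context across records, but then reading one record in the middle means decompressing everything before it, which is the seek problem described above.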
01:45:52dabs quits [Read error: Connection reset by peer]
02:19:45etnguyen03 quits [Remote host closed the connection]
02:47:38nexusxe2 quits [Remote host closed the connection]
03:13:09lemuria quits [Read error: Connection reset by peer]
03:13:38lemuria (lemuria) joins
03:30:22<@OrIdow6>Xe: (And I have become the peanut gallery of this channel) I hate that it's necessary, but I do think it's the pragmatic option
03:31:23<@OrIdow6>Probably not for everything we do, just some tools
03:32:16<@OrIdow6>Also, I'm not sure Cloudflare would let us in haha
03:32:28<@OrIdow6>The vibe Anubis gives me makes me think they might be more amenable to it
03:32:46<Xe>I have good news about who you're talking to then :)
03:36:44<Xe>$ dig TXT +short hackint-proof.xeiaso.net
03:43:46<alexshpilkin>Xe: last I checked Cloudflare's stance on authorizing smaller bots (for e.g. indie search engines) was "piss off"
03:44:23<alexshpilkin>not that I'm suggesting that you would take such a stance
03:45:29<alexshpilkin>but I'm having a bit of a problem reading their docs in a sympathetic light because of that
03:47:37<BlankEclair>i assume that, wrt cloudflare, we're talking about <https://developers.cloudflare.com/bots/concepts/bot/verified-bots/>?
03:48:36<anarcat>uh
03:48:39<anarcat>Xe: hi. :)
03:48:43<anarcat>thanks for anubis
03:49:04Webuser642545 joins
03:49:09<anarcat>we (torproject.org) haven't deployed it yet, but i feel it's just going to happen soon
03:49:23<anarcat>it's amazing the amount of garbage we get thrown our way
03:51:08<Xe>you're welcome <3
03:51:12<Xe>I'd say I can't imagine but
03:51:12<BlankEclair>think there are ai scrapers on the onion service?
03:51:14<Xe>I can
03:51:18<BlankEclair>or, well, even just onion services in general?
03:52:44bladem quits [Ping timeout: 260 seconds]
03:53:32<Xe>I'd imagine not
03:53:47<Xe>v2 maybe because they were easy to sniff out at the relay level
03:53:54<Xe>v3 fixed a lot of that
03:54:32<anarcat>BlankEclair: not that i know of in particular
03:54:39<BlankEclair>ah oki :3
03:54:50<anarcat>i mean there's fundamentally no reason why the bots wouldn't be able to use the .onions
03:54:59<BlankEclair>just that no one has bothered to do it :p
03:55:01<anarcat>they're publicly announced and not for hiding the servers
03:55:13<anarcat>well it doesn't change anything, it would just make their crawls slower
03:55:17<anarcat>which, in fact, would be great
03:55:25<BlankEclair>oh yeah, forgot that tor is slow lol
03:55:29<anarcat>we've thought of shutting down gitlab behind a .onion to give them a big fuck off
03:55:30<BlankEclair><-- probably should use tor more often
03:56:01<anarcat>i mean "slow", it's pretty fast, but there's a cost to building a new circuit, and i bet a bot writer would do it the wrong way and waste a lot of cycles
04:02:22Webuser642545 quits [Client Quit]
04:02:53<Xe>BlankEclair: i used to do IPv6 over tor hidden services
04:02:56<Xe>you think you know slow
04:03:02<BlankEclair>my apologies
04:03:05<Xe>try TCP in TCP in onionland
04:03:15<BlankEclair>okay, jesus fuck
04:03:23<BlankEclair>i've done tcp over tcp before, but not w/ tor
04:04:03nine quits [Quit: See ya!]
04:04:16nine joins
04:04:16nine quits [Changing host]
04:04:16nine (nine) joins
04:22:13Exorcism0666 quits [Quit: Ping timeout (120 seconds)]
04:22:27Exorcism0666 (exorcism) joins
04:22:33DigitalDragons quits [Quit: Ping timeout (120 seconds)]
04:22:51DigitalDragons (DigitalDragons) joins
04:24:00<@OrIdow6>https://developers.cloudflare.com/bots/concepts/bot/verified-bots/ip-validation/ seems like it'd be difficult, only options are a static list and reverse DNS
04:24:35<@OrIdow6>Could not be used with distributed stuff
04:24:46nexusxe quits [Quit: Leaving]
04:27:14<Xe>I'm thinking web bot auth in particular
04:27:21<Xe>the signatures for HTTP requests
04:50:36<nicolas17>Xe: the request signatures would end up in the archived data
04:50:49<nicolas17>I don't immediately see a big problem with that since they expire and can't be replayed
04:50:54<nicolas17>but... something to keep in mind
04:51:30<nicolas17>WARCs preserve all HTTP request and response headers
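For reference, web bot auth builds on HTTP Message Signatures (RFC 9421): the crawler signs selected components of each request and advertises where its public keys live. Below is a rough sketch of what those headers could look like with a freshly generated Ed25519 key; the covered components, the tag value, and the Signature-Agent serialization are assumptions to be checked against the current draft. These are exactly the headers that, as nicolas17 notes, would end up preserved in the WARC.

```python
# Rough sketch of the request-signing side of web bot auth (RFC 9421 style).
# Key id, authority, and the Signature-Agent URL are placeholders; the exact
# component list and serialization should be checked against the draft.
import base64
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()       # in practice a long-lived, published key
authority = "example.org"                # the site being crawled (placeholder)
created = int(time.time())
expires = created + 300                  # short-lived, so it can't be replayed later

params = f'("@authority");created={created};expires={expires};keyid="at-key-1";tag="web-bot-auth"'
signature_base = f'"@authority": {authority}\n"@signature-params": {params}'
sig = base64.b64encode(key.sign(signature_base.encode())).decode()

headers = {
    "Signature-Input": f"sig1={params}",
    "Signature": f"sig1=:{sig}:",
    # The draft also lets the client point at its public key directory:
    "Signature-Agent": "https://example-crawler.example/.well-known/http-message-signatures-directory",
}
print(headers)
```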
04:58:37beardicus quits [Read error: Connection reset by peer]
04:58:49beardicus (beardicus) joins
05:05:38DogsRNice quits [Read error: Connection reset by peer]
05:51:10awauwa (awauwa) joins
06:18:02Webuser278176 joins
06:18:11Webuser278176 quits [Client Quit]
06:18:11Irenes quits [Ping timeout: 276 seconds]
06:34:34kedihacker joins
07:07:50<@arkiver>Xe: thanks for hopping on here again :)
07:12:26<@arkiver>quite some time ago i proposed something for use with anubis and possibly cloudflare and other services
07:12:53<@arkiver>cloudflare for example has the "good bots" list (maybe they changed the name now, not sure), which i believe is mostly IP based.
07:13:11<@arkiver>for Archive Team that would not work of course, due to its distributed workings
07:14:00<@arkiver>something i thought of was to create a key/string on the tracker and send that to anubis/cloudflare upon archiving with the note that up to x URLs will be archived in the coming y minutes.
07:14:52<@arkiver>then for the coming x URLs and/or y minutes, requests with that key in the HTTP headers will be treated as "good bot" requests, and after the x URLs and y minutes, the IP is treated as "bad/normal bot" again.
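A toy sketch of the verifying side of that proposal: the tracker registers a key with a budget of x URLs and y minutes, and the protection layer honours the key until either budget runs out, after which the requester is treated as a normal bot again. All names here (the header, the functions) are hypothetical, purely to make the idea concrete.

```python
# Hypothetical sketch of arkiver's "x URLs in the next y minutes" grant.
import secrets
import time

registered = {}  # key -> {"urls_left": int, "expires": float}

def register_grant(max_urls: int, valid_minutes: int) -> str:
    """Called when the tracker announces an upcoming batch of requests."""
    key = secrets.token_urlsafe(32)
    registered[key] = {"urls_left": max_urls, "expires": time.time() + valid_minutes * 60}
    return key

def is_good_bot_request(headers: dict) -> bool:
    """Called per request by the protection layer; 'X-Tracker-Grant' is made up."""
    grant = registered.get(headers.get("X-Tracker-Grant", ""))
    if not grant or time.time() > grant["expires"] or grant["urls_left"] <= 0:
        return False  # expired or exhausted: back to normal bot handling
    grant["urls_left"] -= 1
    return True

key = register_grant(max_urls=1000, valid_minutes=30)  # tracker does this up front
print(is_good_bot_request({"X-Tracker-Grant": key}))   # True while budget remains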
07:22:40<BlankEclair>i assume this is for dpos projects, right?
07:23:05ATinySpaceMarine quits [Quit: No Ping reply in 180 seconds.]
07:23:46<that_lurker>arkiver: AB pipelines (maybe not all) could be added to the good bots list at least.
07:24:09<BlankEclair>this might be basic, but what if the tracker offered the clients (forgot the exact parlance) a signature of the client's ip + url + an expiry time
07:24:23<BlankEclair>then that sig can be given in a header for anubis to know that the request was given by the tracker
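A minimal sketch of that scheme, assuming an HMAC with a secret shared between the tracker and the site's front end (a public-key signature would work the same way). The token binds the worker's IP, a scope, and an expiry, so it is useless from any other address or after it lapses; per arkiver's follow-up below, a domain scope is probably more practical than an exact URL. Header and field names are made up.

```python
# Hypothetical tracker-issued token: HMAC over (client IP, scope, expiry).
import hashlib
import hmac
import time

SECRET = b"shared-between-tracker-and-anubis"  # placeholder

def issue_token(client_ip: str, scope: str, ttl: int = 900) -> str:
    expires = int(time.time()) + ttl
    msg = f"{client_ip}|{scope}|{expires}".encode()
    mac = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{client_ip}|{scope}|{expires}|{mac}"

def verify_token(token: str, request_ip: str, request_host: str) -> bool:
    ip, scope, expires, mac = token.rsplit("|", 3)
    msg = f"{ip}|{scope}|{expires}".encode()
    return (
        hmac.compare_digest(mac, hmac.new(SECRET, msg, hashlib.sha256).hexdigest())
        and ip == request_ip
        and (request_host == scope or request_host.endswith("." + scope))
        and time.time() < int(expires)
    )

token = issue_token("203.0.113.7", "example.org")
print(verify_token(token, "203.0.113.7", "forum.example.org"))  # True until expiry
```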
07:24:24ATinySpaceMarine joins
07:24:34lemuria quits [Read error: Connection reset by peer]
07:24:59lemuria (lemuria) joins
07:26:44<@arkiver>that_lurker: yeah those could be actually
07:26:51<@arkiver>BlankEclair: yes this was for Warrior projects
07:27:36<@arkiver>BlankEclair: the exact URL could be problematic as the project might archive more than just the URLs given by the tracker
07:27:48<@arkiver>which is the reason for the number of minutes and number of URLs
07:27:52<BlankEclair>ah okay
07:35:33VickoSaviour joins
07:48:24Juesto (Juest) joins
07:48:32Juest quits [Ping timeout: 276 seconds]
07:49:25Juesto is now known as Juest
08:02:50HP_Archivist quits [Ping timeout: 276 seconds]
08:20:28Webuser643332 joins
08:25:18<Webuser643332>Greetings! Can someone help me archive this YouTube video? https://www.youtube.com/watch?v=oXFCQFW77JA Reason: Unlisted. Thank you!
08:30:19<that_lurker>was done in #down-the-tube it seems
08:42:23Dada joins
08:46:27DartRetaliator_ joins
08:48:53Webuser643332 quits [Client Quit]
08:57:32shinon71 quits [Remote host closed the connection]
08:59:37HP_Archivist (HP_Archivist) joins
09:01:19Juest quits [Ping timeout: 260 seconds]
09:10:23shinon71 joins
09:15:17Island quits [Read error: Connection reset by peer]
09:16:41Juest (Juest) joins
09:20:03VickoSaviour quits [Client Quit]
09:20:41Wohlstand (Wohlstand) joins
09:26:24Juest quits [Ping timeout: 260 seconds]
09:28:35Webuser484752 joins
09:34:00<h2ibot>Hans5958 created Microsoft Update (+3904, Add initial page): https://wiki.archiveteam.org/?title=Microsoft%20Update
09:34:01<h2ibot>Hans5958 edited Microsoft Update (-28): https://wiki.archiveteam.org/?diff=56471&oldid=56470
09:34:02<h2ibot>Hans5958 edited Microsoft Update (+34): https://wiki.archiveteam.org/?diff=56472&oldid=56471
09:35:27shinon71 quits [Read error: Connection reset by peer]
09:35:35shinon71 joins
09:38:01<h2ibot>Hans5958 edited Glitch (+17, Clarify that it is inaccessible (Glitch is…): https://wiki.archiveteam.org/?diff=56473&oldid=56454
09:39:01<h2ibot>Hans5958 edited Glitch (+5): https://wiki.archiveteam.org/?diff=56474&oldid=56473
09:39:02<h2ibot>Hans5958 edited Main Page/Current Projects (+69, Mention last deadline for Glitch): https://wiki.archiveteam.org/?diff=56475&oldid=56347
09:41:02<h2ibot>Hans5958 edited Main Page/Current Projects (+107, New project: Microsoft Update (assumed to be…): https://wiki.archiveteam.org/?diff=56476&oldid=56475
09:51:33shinon71 quits [Remote host closed the connection]
09:56:10shinon71 joins
09:59:06<h2ibot>TriangleDemon edited Internet Archive (+130, /* Problems */): https://wiki.archiveteam.org/?diff=56477&oldid=56436
09:59:07<h2ibot>TriangleDemon edited Foro 3DJuegos (-5): https://wiki.archiveteam.org/?diff=56478&oldid=54366
09:59:08<h2ibot>TriangleDemon edited Nhentai (-6): https://wiki.archiveteam.org/?diff=56479&oldid=53565
09:59:09<h2ibot>TriangleDemon edited WeVidi (+32): https://wiki.archiveteam.org/?diff=56480&oldid=55155
09:59:10<h2ibot>TriangleDemon edited List of websites excluded from the Wayback Machine/Partial exclusions (+53): https://wiki.archiveteam.org/?diff=56481&oldid=55793
09:59:11<h2ibot>TriangleDemon edited List of websites excluded from the Wayback Machine (+105): https://wiki.archiveteam.org/?diff=56482&oldid=56440
10:00:06<h2ibot>Arkiver changed the user rights of User:TriangleDemon
10:00:07<h2ibot>Hans5958 edited Main Page/Current Projects (+0, Move Microsoft Update to long-term): https://wiki.archiveteam.org/?diff=56483&oldid=56476
10:01:06<h2ibot>Hans5958 edited Microsoft Update (-6): https://wiki.archiveteam.org/?diff=56484&oldid=56472
10:02:34HP_Archivist quits [Ping timeout: 260 seconds]
10:22:18Webuser484752 quits [Client Quit]
10:48:16Webuser871853 joins
10:48:28Webuser871853 quits [Client Quit]
11:07:08lemuria quits [Read error: Connection reset by peer]
11:07:51lemuria (lemuria) joins
11:11:24midou quits [Ping timeout: 260 seconds]
11:21:26midou joins
11:39:11Exorcism0666 quits [Quit: Ping timeout (120 seconds)]
11:39:17DigitalDragons quits [Quit: Ping timeout (120 seconds)]
11:39:26Exorcism0666 (exorcism) joins
11:39:32DigitalDragons (DigitalDragons) joins
11:46:13<@OrIdow6>arkiver: What I was thinking was that it'd be nice to have a "component name" (the rfc's options of what can be signed) to identify the IP address requests will be coming from
11:46:32<@OrIdow6>h shoot I see BlankEclair already said that
11:50:37<@OrIdow6>I'm not in the mood to read the whole RFC draft now but it already requires the authority (ie the domain, which seems more flexible than URL) and expiry times
12:13:05Xe quits [Ping timeout: 276 seconds]
12:16:55yasomi (yasomi) joins
12:17:42yasomi is now known as Xe
12:18:42Juest (Juest) joins
12:34:17DigitalDragons quits [Client Quit]
12:34:17Exorcism0666 quits [Client Quit]
12:34:32DigitalDragons (DigitalDragons) joins
12:34:32Exorcism0666 (exorcism) joins
12:43:57Exorcism0666 quits [Client Quit]
12:44:08DigitalDragons quits [Client Quit]
12:44:13Exorcism0666 (exorcism) joins
12:44:24DigitalDragons (DigitalDragons) joins
12:50:31croissant_ joins
12:53:48cuphead2527480 (Cuphead2527480) joins
12:54:02croissant quits [Ping timeout: 276 seconds]
12:55:56sec^nd quits [Remote host closed the connection]
12:56:18sec^nd (second) joins
12:56:25croissant` joins
12:57:31notarobot173 joins
12:59:19croissant_ quits [Ping timeout: 260 seconds]
13:01:50notarobot17 quits [Ping timeout: 276 seconds]
13:01:51notarobot173 is now known as notarobot17
13:09:38camrod636 quits [Ping timeout: 276 seconds]
13:12:48camrod636 (camrod) joins
13:25:22Webuser754303 joins
13:25:25Webuser754303 quits [Client Quit]
13:51:49DartRetaliator_ quits [Ping timeout: 260 seconds]
13:55:45lemuria quits [Read error: Connection reset by peer]
13:56:50lemuria (lemuria) joins
14:06:14<@arkiver>OrIdow6: perhaps yeah, would that be a way for them to only trust an IP temporarily?
14:06:27<@arkiver>it's all about not having to trust the IP completely, but only temporarily
14:06:47<@arkiver>of course some ArchiveBot IPs could simply be trusted, but those used for the Warrior projects cannot
14:20:36@OrIdow6 is now known as @OrIdow6-utm_source-Archiveteam3
14:20:50@OrIdow6-utm_source-Archiveteam3 is now known as @OrIdow6-utm_source-AT3AIRC
14:21:39@OrIdow6-utm_source-AT3AIRC is now known as @OrIdow6-utm_source-AT^3AIRC
14:23:25<@arkiver>OrIdow6-utm_source-AT^3AIRC: you ok there?
14:23:41<@OrIdow6-utm_source-AT^3AIRC>arkiver: Per the conversation in -ot :)
14:23:56<@OrIdow6-utm_source-AT^3AIRC>Wow that looks tacky
14:24:00@OrIdow6-utm_source-AT^3AIRC is now known as @OrIdow6
14:26:16<@arkiver>alright i see
14:26:56<[42]>are there known issues with ips getting listed by spamhaus xbl lately?
14:27:09<@arkiver>[42]: likely certainly if you run the #// project
14:27:20<@arkiver>if you run it at scale at least
14:28:10<@OrIdow6>arkiver: From my read of the RFC draft, the way it works is that with the request you send a list of "component names", which are things like the domain, path, etc (and also nonce and validity times which are their own thing), then do a signature over this
14:28:48<@OrIdow6>So as this proposal stands it'd let us restrict it to (time range, domain), but I think it'd be nice to have address in there as a field too
14:29:17<@OrIdow6>Since otherwise handing one of these out with a dpos is a free ticket to access the site from anywhere
14:29:24<@OrIdow6>Until it expires
14:33:38<@OrIdow6>The counter idea is nice too, maybe instead of the time range per se attach it to the nonce
14:34:21<[42]>arkiver: got 7 ips flagged in the last 2 days after not getting any reports for a long while
14:34:34<@arkiver>[42]: with #// ?
14:34:52<@arkiver>[42]: if you have any details on what they flagged on, please let me know (can PM)
14:35:25<@arkiver>OrIdow6: something like that may be nice yes. i hope to check in with Xe and cloudflare about this (cloudflare in progress)
14:35:47<@OrIdow6>The bad scenario is, I suppose: warrior becomes the golden ticket to bypassing all bot blocking, AI companies modify the warrior code to make up outlinks/redirects/etc to whatever domain they choose and get themselves issued signatures for that, AI companies distribute those signatures to their 10k aws machines and scrape away until the validity period expires
14:36:07<[42]>tinba c&c infra to a sinkhole at 216.218.185.162:80
14:36:10<[42]>probably #//
14:36:11<@arkiver>though... I have not seen significant impact yet of the recent cloudflare changes
14:36:15<@arkiver>[42]: yeah likely :/
14:36:17<@arkiver>#//
14:36:56<@OrIdow6>Probably as long as a particular warrior can't say "give me a signature for this domain please" we're not vulnerable to that
14:37:08<@arkiver>OrIdow6: and let's not give them any ideas :P
14:38:37<@arkiver>have we observed examples of cloudflare's recent changes in the wild already?
14:40:07<@OrIdow6>arkiver: Haha fair
14:41:44<[42]>arkiver: i also have some domains linked to that if you're interested? they're all from DGAs though, so probably not that useful
14:41:58<Xe>arkiver: that's why i'm thinking that a single-factor signature would be best
14:42:28<Xe>have the warrior central server send out signatures with its jobs
14:42:34<Xe>use a bank of 16 keys or something
14:42:36<Xe>idk
14:46:12<@arkiver>Xe: yeah and the big difference between anubis and cloudflare would be its centrality. for anubis we would let specific sites know in advance of what is coming
14:46:45<@arkiver>which also means we need to restrict a job to those few domains... which is possible.
14:47:00<@arkiver>certainly
14:48:02<@arkiver>Xe: is anubis still useragent based?
14:48:24<Xe>lol no, there's a lot of heuristics
14:48:57<Xe>i'm in the middle of adding TLS fingerprint heuristics
14:50:27<@arkiver>yeah those can be annoying
14:51:41<Xe>my favorite thing I've added though: https://anubis.techaro.lol/docs/admin/configuration/expressions
14:51:46<Xe>expression based heuristics
14:54:57<@arkiver>a simple `wget https://anubis.techaro.lol/` does seem to work
14:55:39<@arkiver>i do see there's a 200 on the bot check page, so will have to handle that in #// - we don't want that to pass for the actual page
14:55:58<@arkiver>Xe: can i assume https://anubis.techaro.lol/ is always up to date with the newest anubis?
14:56:52@arkiver grumbles at the rise of LLMs
15:13:16grill (grill) joins
15:13:43<anarcat>oh wow, load-based checks, that's really nice
15:14:04<anarcat>okay, while we're pounding Xe with off topic stuff, let me add mine :)
15:14:06<anarcat>i feel privileged
15:14:14<@arkiver>certainly not off topic!
15:14:30<anarcat>well maybe not you but i will certainly venture there now :p
15:14:31<@arkiver>archiving could suffer, so i'm happy we're in contact with Xe about this
15:14:39arch quits [Ping timeout: 260 seconds]
15:14:49<@arkiver>good luck (remember #archiveteam-ot for very lengthy stuff though :P )
15:15:01<@arkiver>(i see Xe is there)
15:17:35<anarcat>yeah, i asked there
15:20:13arch (arch) joins
16:00:41cuphead2527480 quits [Quit: Connection closed for inactivity]
16:06:46DigitalDragons quits [Quit: Ping timeout (120 seconds)]
16:07:03DigitalDragons (DigitalDragons) joins
16:16:06midou quits [Remote host closed the connection]
16:16:16midou joins
16:29:11midou quits [Ping timeout: 276 seconds]
16:34:13midou joins
16:39:34<justauser|m>Is the data from 2015 SourceForge crawl available anywhere?
16:39:52<justauser|m>I'm specifically interested in rsync part.
16:46:05grill quits [Ping timeout: 276 seconds]
16:46:38<@arkiver>i believe we did not make a full copy back then justauser|m
16:47:43grill (grill) joins
16:58:34<h2ibot>Cooljeanius edited Microsoft Update (+18, use URL template): https://wiki.archiveteam.org/?diff=56485&oldid=56484
17:06:04grill quits [Ping timeout: 260 seconds]
17:13:25<justauser|m>Wiki confirms your belief, but you did at least something.
17:14:08<justauser|m>A whole 830.45GiB, according to tracker.
17:17:31<@arkiver>yeah i see this one at least https://archive.org/details/archiveteam-fire_20180112095222
17:17:37<@arkiver>but not immediately more
17:21:15grill (grill) joins
17:22:10<@arkiver>i have to hop off now though...
17:25:23BornOn420 quits [Remote host closed the connection]
17:25:54BornOn420 (BornOn420) joins
17:28:37<egallager>https://ddosecrets.com/article/gaza-volume-02
17:35:27<Xe>arkiver: it's always up to date
17:36:42Exorcism0666 quits [Quit: Ping timeout (120 seconds)]
17:36:43DigitalDragons quits [Client Quit]
17:36:55Exorcism0666 (exorcism) joins
17:36:57DigitalDragons (DigitalDragons) joins
17:43:36HP_Archivist (HP_Archivist) joins
17:46:08cuphead2527480 (Cuphead2527480) joins
17:54:52DigitalDragons quits [Client Quit]
17:54:52Exorcism0666 quits [Client Quit]
17:55:06Exorcism0666 (exorcism) joins
17:55:09DigitalDragons (DigitalDragons) joins
18:22:56chunkynutz60 quits [Ping timeout: 276 seconds]
18:23:35tzt quits [Ping timeout: 276 seconds]
18:24:19tzt (tzt) joins
18:24:30<cuphead2527480>Hey someone here?
18:24:30<cuphead2527480>I'm trying to set "noindex" and "true" but it keeps saying noindex is an exclusive field. You can only have one value.
18:24:30<cuphead2527480>I don't get it
18:24:41<cuphead2527480>Editing metadata on an item
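For what it's worth, that error means "noindex" accepts only a single value, so it wants noindex set to "true" rather than two separate values. A sketch assuming the internetarchive Python client and a placeholder item identifier:

```python
# Set "noindex" to a single value ("true") on an item.
# "YOUR-ITEM-IDENTIFIER" is a placeholder.
from internetarchive import modify_metadata

resp = modify_metadata("YOUR-ITEM-IDENTIFIER", metadata={"noindex": "true"})
print(resp)  # should report success once the queued metadata task runs
```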
18:26:20dabs joins
18:27:39nexusxe joins
18:27:46<cuphead2527480>Hi dabs
18:30:05lemuria quits [Read error: Connection reset by peer]
18:30:34lemuria (lemuria) joins
18:30:43<dabs>howdy
18:45:47DigitalDragons quits [Client Quit]
18:46:02DigitalDragons (DigitalDragons) joins
19:15:34awauwa quits [Quit: awauwa]
19:16:14tzt quits [Ping timeout: 276 seconds]
19:17:26tzt (tzt) joins
19:21:57Dada quits [Remote host closed the connection]
19:36:33DigitalDragons quits [Client Quit]
19:36:46DigitalDragons (DigitalDragons) joins
19:41:58arch quits [Remote host closed the connection]
19:42:11arch (arch) joins
19:42:20Dada joins
19:45:37Dada quits [Remote host closed the connection]
19:47:39grill quits [Ping timeout: 260 seconds]
19:48:31Dada joins
19:49:24ThreeHM quits [Ping timeout: 260 seconds]
19:51:06ThreeHM (ThreeHeadedMonkey) joins
19:55:14Dango360 quits [Ping timeout: 260 seconds]
19:58:57Dango360 (Dango360) joins
20:06:19Dango360 quits [Ping timeout: 260 seconds]
20:07:32Dango3604 (Dango360) joins
20:11:34Dango3604 is now known as Dango360
20:15:14epoch (epoch) joins
20:25:01nine quits [Quit: See ya!]
20:25:14nine joins
20:25:14nine quits [Changing host]
20:25:14nine (nine) joins
20:26:22Guest58 joins
20:29:11Dada quits [Remote host closed the connection]
20:33:04Island joins
21:16:35Irenes (ireneista) joins
21:53:01Exorcism0666 quits [Quit: Ping timeout (120 seconds)]
21:53:16Exorcism0666 (exorcism) joins
21:53:20DigitalDragons quits [Client Quit]
21:53:37DigitalDragons (DigitalDragons) joins
22:13:15<BlankEclair>https://gamerant.com/steam-removed-games-publishing-rules-update/
22:13:16<BlankEclair>ugh
22:16:58atphoenix__ (atphoenix) joins
22:19:19atphoenix_ quits [Ping timeout: 260 seconds]
22:22:45<nicolas17>BlankEclair: https://64.media.tumblr.com/9163aa74a5eabf8069aef5e95bb71b8e/6910a51bfb2d48b6-64/s540x810/5216ac3d5ce948786a1426768fc1074f64665d94.png
22:23:05<BlankEclair>yeah...
22:24:31Wohlstand quits [Quit: Wohlstand]
22:28:12dabs quits [Read error: Connection reset by peer]
22:42:05lemuria quits [Read error: Connection reset by peer]
22:42:57lemuria (lemuria) joins
22:53:29<nicolas17>the xcode simulators are now showing up on wbm cdx api
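For anyone wanting to check themselves: captures can be queried through the Wayback Machine CDX API with a prefix match. A sketch with a placeholder URL prefix (the actual simulator download URLs aren't reproduced here):

```python
# Query the Wayback Machine CDX API for captures under a URL prefix.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "url": "example.apple.com/path/",  # placeholder prefix, not the real URLs
    "matchType": "prefix",
    "output": "json",
    "limit": "10",
})
with urllib.request.urlopen(f"https://web.archive.org/cdx/search/cdx?{params}") as resp:
    rows = json.load(resp)

for row in rows[1:]:  # first row is the field header
    print(row)
```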
23:08:56<h2ibot>Dango360 edited Roblox/uncopylocked (+108, added Nickelodeon Suites Resort by Guest368712): https://wiki.archiveteam.org/?diff=56486&oldid=56403
23:15:11etnguyen03 (etnguyen03) joins
23:23:29NotGLaDOS quits [Ping timeout: 260 seconds]
23:37:49NotGLaDOS joins
23:39:24etnguyen03 quits [Client Quit]
23:39:45etnguyen03 (etnguyen03) joins
23:49:30etnguyen03 quits [Client Quit]
23:58:38etnguyen03 (etnguyen03) joins