00:13:14mutantmonkey quits [Remote host closed the connection]
00:13:33mutantmonkey (mutantmonkey) joins
00:21:58Arcorann (Arcorann) joins
00:35:03mutantmonkey quits [Remote host closed the connection]
00:35:03sec^nd quits [Remote host closed the connection]
00:35:42sec^nd (second) joins
00:35:55mutantmonkey (mutantmonkey) joins
00:37:05igloo22225 quits [Quit: The Lounge - https://thelounge.chat]
00:41:25igloo22225 (igloo22225) joins
00:49:42Wingy1139793760 quits [Ping timeout: 246 seconds]
01:00:03HackMii_ quits [Remote host closed the connection]
01:00:34HackMii_ (hacktheplanet) joins
01:51:22mutantmonkey quits [Remote host closed the connection]
01:51:57mutantmonkey (mutantmonkey) joins
02:12:11mutantmonkey quits [Remote host closed the connection]
02:12:36mutantmonkey (mutantmonkey) joins
02:25:06Wayward (wayward) joins
02:27:55mutantmonkey quits [Remote host closed the connection]
02:28:32mutantmonkey (mutantmonkey) joins
02:45:54Wingy1139793760 (Wingy) joins
03:04:41Wingy1139793760 quits [Read error: Connection reset by peer]
03:05:32Wingy1139793760 (Wingy) joins
03:13:30Iki joins
03:18:11Wingy1139793760 quits [Ping timeout: 240 seconds]
03:24:17Wingy1139793760 (Wingy) joins
03:54:33Arcorann quits [Ping timeout: 265 seconds]
04:30:17HackMii_ quits [Remote host closed the connection]
04:31:16HackMii_ (hacktheplanet) joins
04:41:24HackMii_ quits [Remote host closed the connection]
04:41:52HackMii_ (hacktheplanet) joins
05:18:26<systwi>https://www.animenewsnetwork.com/news/2022-07-07/yu-gi-oh-manga-creator-kazuki-takahashi-passes-away-at-60/.187468
05:18:53<systwi>Yu-Gi-Oh! manga creator Kazuki Takahashi passed away recently.
05:30:52HackMii_ quits [Remote host closed the connection]
05:31:12HackMii_ (hacktheplanet) joins
05:43:10wyatt8750 joins
05:43:47wyatt8740 quits [Ping timeout: 265 seconds]
05:56:20DiscantX joins
05:57:45HackMii_ quits [Remote host closed the connection]
05:57:45sec^nd quits [Remote host closed the connection]
05:58:18HackMii_ (hacktheplanet) joins
05:58:28sec^nd (second) joins
06:04:43Arcorann (Arcorann) joins
06:32:36wickedplayer494 quits [Ping timeout: 265 seconds]
06:36:12wickedplayer494 joins
06:47:11march_happy quits [Read error: Connection reset by peer]
06:48:19march_happy (march_happy) joins
06:51:01sec^nd quits [Remote host closed the connection]
06:51:01HackMii_ quits [Write error: Broken pipe]
06:51:20HackMii_ (hacktheplanet) joins
06:54:04sec^nd (second) joins
07:09:18HackMii_ quits [Remote host closed the connection]
07:10:42HackMii_ (hacktheplanet) joins
07:21:44HackMii_ quits [Remote host closed the connection]
07:22:01HackMii_ (hacktheplanet) joins
07:31:41mutantmonkey quits [Ping timeout: 240 seconds]
07:34:01gazorpazorp quits [Read error: Connection reset by peer]
07:34:31mutantmonkey (mutantmonkey) joins
07:42:07mutantmonkey quits [Remote host closed the connection]
07:42:55mutantmonkey (mutantmonkey) joins
07:57:08BlueMaxima quits [Read error: Connection reset by peer]
07:58:41Wingy1139793760 quits [Ping timeout: 240 seconds]
08:14:41HackMii_ quits [Remote host closed the connection]
08:15:07HackMii_ (hacktheplanet) joins
08:49:21Arcorann quits [Read error: Connection reset by peer]
08:55:34Arcorann (Arcorann) joins
09:19:07DiscantX quits [Read error: Connection reset by peer]
09:23:13DiscantX joins
09:44:11HackMii_ quits [Remote host closed the connection]
09:44:11sec^nd quits [Remote host closed the connection]
09:44:39HackMii_ (hacktheplanet) joins
09:44:40sec^nd (second) joins
10:12:11sec^nd quits [Ping timeout: 240 seconds]
10:17:35sec^nd (second) joins
10:22:41qw3rty quits [Ping timeout: 240 seconds]
10:32:51tech_exorcist (tech_exorcist) joins
10:47:41sec^nd quits [Ping timeout: 240 seconds]
10:50:08Wingy1139793760 (Wingy) joins
10:52:28sec^nd (second) joins
11:16:30gazorpazorp (gazorpazorp) joins
11:17:41tech_exorcist quits [Ping timeout: 240 seconds]
11:26:57march_happy quits [Ping timeout: 265 seconds]
11:38:41DiscantX quits [Ping timeout: 240 seconds]
11:39:05mutantmonkey quits [Remote host closed the connection]
11:39:05sec^nd quits [Remote host closed the connection]
11:39:06HackMii_ quits [Write error: Connection reset by peer]
11:40:07qw3rty joins
11:43:21mutantmonkey (mutantmonkey) joins
11:44:48sec^nd (second) joins
11:46:50HackMii_ (hacktheplanet) joins
12:17:11sec^nd quits [Ping timeout: 240 seconds]
12:28:24sec^nd (second) joins
12:52:45George joins
12:53:38<George>Good afternoon. I'd like to know if there are any tools that will allow me to archive a website that has no directory listings.
12:55:18<Maakuth|m>are there web pages linking to the content?
12:56:11<Maakuth|m>there's no other way than links (either from dir listings or other sort of pages) to know what files are there
12:58:13<Maakuth|m>https://www.guyrutenberg.com/2014/05/02/make-offline-mirror-of-a-site-using-wget/ here's how you can do it with wget. if you want to store it in IA, some folks in this channel could point archiveteam tools to fetch the page
12:58:46<Maakuth|m>but if it's some random filenames in some url and there's no listing
13:01:09<George>So, there are pages on the web archive that used to link to specific files on that server, however since the scripting for those pages doesn't work, it's hard to find the original. Brute-forcing is possible since most filenames follow a pattern, but I'd like to avoid that.
13:01:50mutantmonkey quits [Remote host closed the connection]
13:01:50sec^nd quits [Remote host closed the connection]
13:01:51HackMii_ quits [Write error: Broken pipe]
13:02:12mutantmonkey (mutantmonkey) joins
13:02:17<George>To be more specific, I'm trying to archive the PopCap file server. It is still live, but there's no telling when EA will shut it down, and that's PopCap history.
13:02:22HackMii_ (hacktheplanet) joins
13:02:29<Maakuth|m>if you can stay on this channel until the archivists in american timezones wake up, they might be able to help
13:02:33sec^nd (second) joins
13:03:02<George>I'll keep the page open. Thank you.
13:04:16<George>Maakuth|m: Can't use wget, since there are no html pages, as far as I can tell, so no way to know the hierarchy.
13:31:52benjinsmith is now known as benjins
13:36:33qw3rty quits [Ping timeout: 246 seconds]
13:44:38<thuban>George: can you explain (specifically) what you mean by "there are pages on the web archive that used to link to specific files on that server, however since the scripting for those pages doesn't work, it's hard to find the original"?
13:45:23qw3rty joins
13:46:21<George>The download button's url is "#", and I don't know what script would originally run to generate those links.
13:48:40<thuban>depending on the architecture of the original site, it may or may not be possible to recover those urls from the archived version. if not (and if there are no other sources of listings), you'll have to settle for brute-force enumeration.
13:48:45<thuban>can you link to the archived page(s)?
13:56:37spirit joins
13:56:54<George>OK, presumably somewhere here should be a link to a file called "PlantsVsZombiesSetup_20120812.exe", which is the latest build, and presumably (based on archival date) the page should link to this file.
13:57:02<George>http://web.archive.org/web/20121101034342/http://www.popcap.com/games/plants-vs-zombies/pc?mid=view_id%5Blist+popular%5D_page_id%5B1%5D_lang%5Ben%5D
13:57:06<George>http://web.archive.org/web/20120903102403/http://www.popcap.com/games/plants-vs-zombies/pc?mid=view_id%5Blist+popular%5D_page_id%5B1%5D_lang%5Ben%5D
13:58:24<George>Full link: http://static-www.ec.popcap.com/binaries/popcap_downloads/PlantsVsZombiesSetup_20120812.exe
13:59:02<George>Note that while popcap.com is dead, static-www.ec.popcap.com is still live.
14:01:13<TheTechRobo>George: Where's the download button on that web.archive.org link?
14:02:19<TheTechRobo>George: You can also see what pages on static-www.ec.popcap.com are already archived by the Wayback Machine. http://web.archive.org/web/*/static-www.ec.popcap.com/*
14:02:55<thuban>TheTechRobo: it appears that it would normally be below the summary (#cta), but is hidden due to lack of javascript
14:03:57<George>That's because the pages from 2012, which is when most PopCap games got updated, do not seem to have a download button; there's just an empty space where earlier grabs had that button
14:04:34<George>Here's a grab with a download button, but this will link to an earlier version, which is already archived
14:04:34<George>http://web.archive.org/web/20120204200735/http://www.popcap.com/games/plants-vs-zombies/pc?mid=view_id%5Blist+popular%5D_page_id%5B1%5D_lang%5Ben%5D
14:05:11<George>TheTechRobo: static-www.ec.popcap.com on web archive is not a complete grab.
14:05:37<TheTechRobo>George: I know, but it might help you find example URLs or edge cases or something.
14:08:06<TheTechRobo>Here's the beautified version of some JS that references try_btn (the link with the href of #): https://pastebin.com/1FfUcbb9
14:08:23<George>TheTechRobo: Oh, yeah, that's where I got the filename patterns in the first place. The problem is the latest versions, updated in 2011-2012, when they changed the pattern from "GameNameSetup.exe" to "GameNameSetup_Builddate.exe", which means the only way is to try all URLs for each game across the date range 01012011 to 12312012....
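The enumeration George describes — every `GameNameSetup_YYYYMMDD.exe` from 20110101 through 20121231 — can be sketched in a few lines of Python. The URL base and naming pattern come from the log; `candidate_urls` is a name of my choosing:

```python
from datetime import date, timedelta

BASE = "http://static-www.ec.popcap.com/binaries/popcap_downloads"

def candidate_urls(game, start=date(2011, 1, 1), end=date(2012, 12, 31)):
    """Yield one GameNameSetup_YYYYMMDD.exe URL per day in [start, end]."""
    d = start
    while d <= end:
        yield f"{BASE}/{game}Setup_{d.strftime('%Y%m%d')}.exe"
        d += timedelta(days=1)

urls = list(candidate_urls("PlantsVsZombies"))
print(len(urls))  # 731 candidates: 365 days in 2011 + 366 in leap-year 2012
```

At one request per minute, 731 candidates per game is roughly half a day per game, which is where George's "about a week" estimate for ~40 games comes from.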
14:09:00<TheTechRobo>ah
14:10:19Wingy1139793760 quits [Ping timeout: 265 seconds]
14:10:37Iki1 joins
14:12:04<George>And that's just the installers, there's also some PDF files (and who knows what else) I'd be interested in archiving. I was wondering if there are any tools I can use to "analyze" static-www.ec.popcap.com to get the full structure, and get a full backup of the website.
14:12:13AnotherIki joins
14:12:44Iki quits [Ping timeout: 265 seconds]
14:15:09Iki1 quits [Ping timeout: 265 seconds]
14:15:30<thuban>George: based on reading the code (the <script> within the #cta div and popcap_oid.js), the url inserted into the download link would have been extracted from a json api response accessed at popcap.com/popcap_oid/190/11950/trynow/US, but that url was not archived and is now dead
14:16:51<thuban>in the general case, no, it is not possible to enumerate all the files on a remote web server. the best you can do is brute force and educated guessing
14:17:08<@OrIdow6>How recently did the non-"ec" site shut down?
14:17:37<thuban>in this case you've got quite a restricted pattern; why not go for it?
14:17:46<@OrIdow6>Yeah, only a few thousand to try
14:19:09<George>Not sure when it shut down. According to Wikipedia PopCap Studio itself got shut down sometime in 2012, which checks out considering after that it was console and mobile ports galore for PopCap games, don't know how much longer the website survived, though.
14:20:36<@OrIdow6>Oh, obviously that decreases the chance that something that can be used to list them all is still up
14:21:03<George>I'll try to come up with a list then feed that list to wget. Would you say 1 minute is a "non-offensive" interval for a server? That'll take me about a week.
14:34:05<thuban>George: honestly, i doubt anybody's looking very hard. if you just use an http HEAD request you can probably get away with little or no delay.
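thuban's suggestion — probe with HEAD so only headers transfer — might look like the sketch below. `head_ok` and `probe` are hypothetical names; the existence check is injectable so the loop can be exercised without touching the real server:

```python
import time
import urllib.request
import urllib.error

def head_ok(url, timeout=10):
    """True if the server answers a HEAD request with a 2xx status."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except urllib.error.HTTPError:
        return False  # 404 and friends

def probe(urls, exists=head_ok, delay=0.0):
    """Return the URLs that exist, sleeping `delay` seconds between checks."""
    hits = []
    for url in urls:
        if exists(url):
            hits.append(url)
        time.sleep(delay)
    return hits
```

With `delay=0.0` this matches thuban's "little or no delay" advice; bump it up if the server starts pushing back.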
14:35:07<thuban>i just checked plants vs. zombies for all of 2011 and 2012 and only found the one build; was there a list of other games you wanted to try?
14:35:32sec^nd quits [Remote host closed the connection]
14:35:32mutantmonkey quits [Remote host closed the connection]
14:35:33HackMii_ quits [Write error: Broken pipe]
14:36:11mutantmonkey (mutantmonkey) joins
14:36:23HackMii_ (hacktheplanet) joins
14:37:48<George>Due to the popularity of PvZ, it is known that 20120812 is the latest build. It's weird that you only found one build, since there's also PlantsVsZombies_20120801.exe and even some out-of-pattern ones like PlantsVsZombiesSetup_20110727_2_2.exe and PlantsVsZombies_20110922_EN_3_1.exe
14:37:57HackMii_ quits [Remote host closed the connection]
14:38:02sec^nd (second) joins
14:38:16HackMii_ (hacktheplanet) joins
14:39:50<George>So there's no way of telling what other games are named. There were ~40 games on the PopCap page, before they were moved to Pogo and eventually Origin. Interestingly enough, the version of PvZ you get on Origin is not the latest. But that is also true for a lot of other Classic titles on Origin.
14:40:58<thuban>http://static-www.ec.popcap.com/binaries/popcap_downloads/PlantsVsZombiesSetup_20120801.exe 404s. i did not, of course, attempt out-of-pattern files, although the brute-force could be extended to them as well.
14:41:19<George>There's also this on archive.org. I don't really know how the uploader got the filenames for most of these I am also not sure if this is the full set. He seems offline on pretty much every platform.
14:41:19<George>https://archive.org/details/PopCap-Games
14:42:18geezabiscuit quits [Read error: Connection reset by peer]
14:42:30geezabiscuit (geezabiscuit) joins
14:43:42<George>It's http://static-www.ec.popcap.com/binaries/popcap_downloads/PlantsVsZombies_20120801.exe
14:44:08<George>So yeah, there's sort of a pattern, but they didn't exactly always follow it.
15:04:01<spirit>http://web.archive.org/web/20121212035005/http://static-www.ec.popcap.com/binaries/popcap_downloads/PlantsVsZombiesSetup_20120812.exe ?
15:04:19<spirit>i simply checked http://web.archive.org/web/*/http://static-www.ec.popcap.com/binaries/popcap_downloads/Plants*
15:04:36<spirit>and then chose an old snapshot from 2012 for that url to get the exe
15:08:11Arcorann quits [Ping timeout: 240 seconds]
15:10:48<George>OK, but if I try this http://web.archive.org/web/*/http://static-www.ec.popcap.com/binaries/popcap_downloads/Zuma*
15:11:44sec^nd quits [Remote host closed the connection]
15:12:21HackMii_ quits [Remote host closed the connection]
15:12:21mutantmonkey quits [Remote host closed the connection]
15:13:09<George>Notice, there's no file named "ZumaSetup_1_5.exe", while the upload I linked above does, and I have no clue where the uploader got that filename from. So no, we don't have a full listing on web archive.
15:15:59mutantmonkey (mutantmonkey) joins
15:18:46<George>Also, I used this just to be thorough. !!Warning!! large data.
15:18:47<George>http://web.archive.org/cdx/search/cdx?url=popcap.com/&matchType=domain
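The CDX endpoint George links returns plain text, one capture per line, with seven space-separated fields in the default order (urlkey, timestamp, original, mimetype, statuscode, digest, length). A minimal parser; the sample line is made up for illustration:

```python
from collections import namedtuple

# Default field order of the Wayback Machine CDX plain-text output
CdxRow = namedtuple("CdxRow", "urlkey timestamp original mimetype statuscode digest length")

def parse_cdx(text):
    """Split each non-empty CDX line into its seven space-separated fields."""
    return [CdxRow(*line.split(" ", 6)) for line in text.splitlines() if line]

# Illustrative sample line in the documented format (not a real capture):
sample = "com,popcap)/ 20121101034342 http://www.popcap.com/ text/html 200 AAAA 12345"
row = parse_cdx(sample)[0]
print(row.timestamp, row.statuscode)
```

Filtering the parsed rows by `original` prefix is one way to pull every known `popcap_downloads` URL out of that large dump.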
15:20:47<@OrIdow6>What do you mean "the upload I linked above does"? Which link was that?
15:21:07<TheTechRobo>OrIdow6: This?
15:21:07<TheTechRobo><George> There's also this on archive.org. I don't really know how the uploader got the filenames for most of these I am also not sure if this is the full set. He seems offline on pretty much every platform.
15:21:07<TheTechRobo><George> https://archive.org/details/PopCap-Games
15:21:16<George>Yes
15:21:26<@OrIdow6>Oh
15:23:05<spirit>ah, files from such uploads rarely end up in the web.archive.org index
15:24:11<George>Yes, I'm just wondering where he even got those filenames from, a lot of those are not on web archive.
15:24:51<@OrIdow6>flashfire42: Know "XXLuigiMario"? Think they would be willing to answer questions about their upload? See above
15:29:27sec^nd (second) joins
15:31:24Wingy1139793760 (Wingy) joins
15:34:36<spirit>if you are on a hunt for old gaming files, also check out the fileplanet, atomicgamer and gamefront archives
15:34:59<spirit>no idea if there are interfaces for the two latter ones but for fileplanet try https://www.quaddicted.com/stuff/fileplanet/fileplanet.php?filename=PlantsVsZombies
15:35:11<spirit>atomicgamer i could check for you on a full local copy i got
15:39:15HackMii_ (hacktheplanet) joins
15:40:33<George>I doubt we'll find these files anywhere else. Specifically the latest builds were uploaded sometime before the studio was shut down. I just doubt they ever made it anywhere other than the main server
15:41:18HackMii_ quits [Remote host closed the connection]
15:41:35HackMii_ (hacktheplanet) joins
15:43:45<thuban>hey, anyone remember https://github.com/curl/curl/issues/8933 ?
15:44:01<thuban>it totally DOES reproduce if the url is http-only
16:01:32tech_exorcist (tech_exorcist) joins
16:09:20tech_exorcist quits [Remote host closed the connection]
16:10:05tech_exorcist (tech_exorcist) joins
16:13:16Wingy1139793760 quits [Read error: Connection reset by peer]
16:14:10Wingy1139793760 (Wingy) joins
16:22:23thehedgeh0g quits [Remote host closed the connection]
16:22:25evan quits [Remote host closed the connection]
16:22:25shreyasminocha quits [Remote host closed the connection]
16:22:26jamesp quits [Remote host closed the connection]
16:23:05tech_exorcist_ (tech_exorcist) joins
16:24:58tech_exorcist quits [Write error: Broken pipe]
16:26:44evan joins
16:26:47jamesp (jamesp) joins
16:26:48shreyasminocha (shreyasminocha) joins
16:26:50thehedgeh0g (mrHedgehog0) joins
16:29:03tech_exorcist_ quits [Remote host closed the connection]
16:29:22tech_exorcist_ (tech_exorcist) joins
16:32:48Wingy1139793760 quits [Read error: Connection reset by peer]
16:33:46Wingy1139793760 (Wingy) joins
16:34:26thehedgeh0g quits [Remote host closed the connection]
16:34:27jamesp quits [Remote host closed the connection]
16:34:28shreyasminocha quits [Remote host closed the connection]
16:34:28evan quits [Remote host closed the connection]
16:36:12evan joins
16:36:14thehedgeh0g (mrHedgehog0) joins
16:36:15jamesp (jamesp) joins
16:36:15shreyasminocha (shreyasminocha) joins
16:39:16Megame (Megame) joins
16:42:41guest1234 quits [Quit: Leaving]
16:44:01<George>Here's a more-or-less complete pattern list:
16:44:01<George>GameName.exe, GameName.dmg, GameName_DateRange.exe*, GameName_DateRange.dmg, GameNameSetup.exe, GameNameSetup_DateRange.exe*
16:44:01<George>GameNameSetup_1_1.exe, GameNameSetup_1_2.exe, GameNameSetup_1_3.exe, GameNameSetup_1_4.exe, GameNameSetup_1_5.exe**, GameNameSetup_2_1.exe, GameNameSetup_2_2.exe, GameNameSetup_2_3.exe, GameNameSetup_2_4.exe, GameNameSetup_2_5.exe**, GameNameSetup_3_1.exe*
16:44:02<George>GameNameSetupAX.exe, GameNameSetup-en.exe, GameNameSetup-fr.exe, GameNameSetup-de.exe, GameNameSetup-es.exe, GameNameSetup-it.exe
16:44:02<George>* Where DateRange is 20110101 to 20121231
16:44:02<George>** More if *_5.exe exists
16:44:02<George>Weird filenames: PlantsVsZombiesSetup_20110727_2_2.exe (PlantsVsZombiesSetup_20110727.exe does not exist), PlantsVsZombies_20110922_EN_3_1.exe (PlantsVsZombies_20110922.exe does not exist)
16:44:02<George>Possibly run these as well?
16:44:09<George>aaahhhhh
16:45:05<George>https://pastebin.com/J01xn5aT
16:46:24<George>Oh, and the server is case-sensitive.
16:47:23<thuban>George: i am already running a slightly broader search.
16:47:38<George>Awesome!
16:47:41<thuban>can you give an example of AX? and what do you mean by "More if *_5.exe exists"?
16:48:08<George>I'll post the gamelist in a moment
16:52:10<George>I know these files exist: BejeweledSetupAX.exe, InsaniquariumSetupAX.exe, ZumaSetupAX.exe. "More if *_5.exe exists" means if GameNameSetup_1_5.exe exists, try GameNameSetup_1_6.exe, GameNameSetup_1_7.exe, ... until we get a 404. Same for the GameNameSetup_2_1.exe and GameNameSetup_3_1.exe links.
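The "try _6, _7, ... until we get a 404" rule George states is mechanical enough to sketch. `extend_series` is a hypothetical helper, and the `known` set stands in for the real server's responses:

```python
def extend_series(pattern, start, exists):
    """Probe pattern % start, pattern % (start+1), ... collecting every name
    that exists, and stop at the first miss (the first 404)."""
    found = []
    n = start
    while exists(pattern % n):
        found.append(pattern % n)
        n += 1
    return found

# Simulated server contents: only _1_5 and _1_6 are present.
known = {"ZumaSetup_1_5.exe", "ZumaSetup_1_6.exe"}
hits = extend_series("ZumaSetup_1_%d.exe", 5, known.__contains__)
print(hits)  # ['ZumaSetup_1_5.exe', 'ZumaSetup_1_6.exe']
```

In practice `exists` would be an HTTP HEAD check against static-www.ec.popcap.com rather than a set lookup.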
16:56:55mutantmonkey quits [Remote host closed the connection]
16:57:37<thuban>hm, ok. i'll have a look at those afterwards. (i'm going very fast, but this will take a while.)
16:57:52mutantmonkey (mutantmonkey) joins
17:01:30<George>Here's a list of filenames that I know of
17:01:36<George>Filenames: https://pastebin.com/GZ9basgu
17:01:52<George>Patterns: https://pastebin.com/J01xn5aT
17:12:10<thuban>no bookworm adventures?
17:12:24<George>BWA is bookworm adventures
17:15:00<thuban>oic
17:19:32Wingy1139793760 quits [Read error: Connection reset by peer]
17:19:38<George>Well... crap
17:19:40<George>http://static-www.ec.popcap.com/binaries/popcap_downloads/ZumaSetup_FP.exe
17:20:25Wingy1139793760 (Wingy) joins
17:21:39thehedgeh0g quits [Remote host closed the connection]
17:21:40evan quits [Remote host closed the connection]
17:21:41jamesp quits [Remote host closed the connection]
17:21:44shreyasminocha quits [Remote host closed the connection]
17:22:24evan joins
17:22:27thehedgeh0g (mrHedgehog0) joins
17:22:27jamesp (jamesp) joins
17:22:27shreyasminocha (shreyasminocha) joins
17:27:10<@OrIdow6>What?
17:29:11mutantmonkey quits [Ping timeout: 240 seconds]
17:29:27Wingy1139793760 quits [Read error: Connection reset by peer]
17:30:23Wingy1139793760 (Wingy) joins
17:32:41mutantmonkey (mutantmonkey) joins
17:45:11<thuban>more weird formats
17:47:31<George>thuban: You got files outside of my format list? And "_FP"?
17:49:51<thuban>George: i have no way of knowing of anything i haven't heard about from you. my "slightly broader" search only involves combining different variants.
17:50:55<George>I see
17:55:00<George>If we could do something like GameName(Setup)_[0-9a-zA-Z] for exe and dmg and give it a pretty generous character limit (say) that would probably cover everything. But, yeah... that would also take forever...
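The "take forever" intuition is easy to quantify: a free suffix over [0-9a-zA-Z] gives 62 choices per character, so the space grows as 62^n. A quick back-of-the-envelope (the lengths are chosen purely for illustration):

```python
ALPHABET = 62  # digits + lowercase + uppercase, i.e. [0-9a-zA-Z]

def search_space(max_len):
    """Count all suffix strings of length 1 through max_len over the alphabet."""
    return sum(ALPHABET ** n for n in range(1, max_len + 1))

for n in (2, 4, 8):
    print(n, search_space(n))
```

Length 8 alone is about 2.2e14 candidates; even at a thousand requests per second that is thousands of years, which is why the constrained date/version patterns are the only practical route.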
18:27:34Megame quits [Client Quit]
18:57:25tzt quits [Ping timeout: 265 seconds]
19:01:42tzt (tzt) joins
19:04:10systwi_ joins
19:30:45HiccupJul_ (HiccupJul) joins
19:31:04<HiccupJul_>would it be possible to scrape a site using archivebot using login credentials?
19:31:17<HiccupJul_>(my own credentials)
19:31:47<HiccupJul_>redump.org is being neglected by its current owner, he might even let the domain expire, and i think it'd be good to scrape the whole site
19:31:53<HiccupJul_>but a lot of it is behind a login wall
19:32:40<HiccupJul_>i mean i'd be happy to just run something locally - archivebot, wget, whatever
19:33:08<HiccupJul_>i've got a working wget command but i'm not sure how to make it download all pages from redump.org including subdomains, but not download external pages
19:40:18HackMii_ quits [Remote host closed the connection]
19:40:51HackMii_ (hacktheplanet) joins
19:50:10<spirit>Domain Name: redump.org
19:50:10<spirit>Updated Date: 2021-07-22T09:26:41Z
19:50:19<spirit>what are you referring to?!
19:50:27<spirit>oh
19:50:46<spirit>wrong year
19:50:51<spirit>still, context?
19:55:20mutantmonkey quits [Remote host closed the connection]
19:55:20sec^nd quits [Remote host closed the connection]
19:55:21HackMii_ quits [Write error: Broken pipe]
19:55:21tech_exorcist_ quits [Write error: Broken pipe]
19:55:46mutantmonkey (mutantmonkey) joins
19:55:50HackMii_ (hacktheplanet) joins
19:56:11Jake quits [Ping timeout: 240 seconds]
19:56:16tech_exorcist_ (tech_exorcist) joins
19:56:33sec^nd (second) joins
19:56:57<HiccupJul_>well
19:57:04<HiccupJul_>its a video game disc preservation project
19:57:41<HiccupJul_>it has information on more than 50,000 discs and the site is used to help people make collections of disc images ("ISOs") of these discs
19:59:28<HiccupJul_>https://www.whois.com/whois/redump.org
20:00:05<HiccupJul_>seems like it will expire at the end of the month. nothing to say the admin won't renew it, but he doesn't even speak to the people that maintain the database, so...
20:01:02<spirit>aye redump rules
20:01:09<spirit>maybe ask about it in their forums?
20:02:13Wingy1139793760 quits [Read error: Connection reset by peer]
20:03:05Wingy1139793760 (Wingy) joins
20:04:35<HiccupJul_>ah well
20:04:46<HiccupJul_>i'm doing this on behalf/in communication with other users
20:05:11<HiccupJul_>the admin is extremely paranoid and uncommunicative, there's no point trying to talk to him
20:05:30<HiccupJul_>but as he's so negligent, its not like he will actually notice or do anything about people scraping the site
20:06:01<HiccupJul_>people are scraping parts of it already, but i want to do a whole-site scrape
20:07:12<spirit>i saw no mention of anything in the forums, which seem to be the most reasonable place?
20:07:50<HiccupJul_>a lot of discussion occurs in the "VGPC" discords, which aren't affiliated but most/many active redump users talk there
20:08:15<HiccupJul_>the admin (irobot) tends to shut down threads that are critical of him
20:08:36<thuban>HiccupJul_: archivebot doesn't save stuff behind login walls, but https://github.com/ArchiveTeam/grab-site is the local equivalent
20:09:14<HiccupJul_>thanks
20:09:43<HiccupJul_>i'll try that instead of wget
20:10:16Wingy1139793760 quits [Read error: Connection reset by peer]
20:11:10Wingy1139793760 (Wingy) joins
20:11:33<spirit>(discord is a fucking information void and must die)
20:11:42<spirit>ouch, good luck then and thanks for considering the archival!
20:13:01<thuban>(if you'd prefer wget, you should be able to get what you want with the --span-hosts and --domains options)
20:14:41Wingy1139793760 quits [Remote host closed the connection]
20:16:48<HiccupJul_>hm
20:17:03<HiccupJul_>grab-site's usage manual says that --no-offsite-links can prevent subdomain crawling...
20:17:25Wingy1139793760 (Wingy) joins
20:18:42<HiccupJul_>but maybe i don't need to use that flag, and grab-site will be sensible and not try and archive the entire web
20:19:47<thuban>uh, hm
20:20:34<TheTechRobo>HiccupJul_: grab-site by default only crawls the offsite links to a depth of 1, i.e. not recursively
20:20:47<HiccupJul_>ah, okay that seems sensible
20:20:51<TheTechRobo>it'll download the offsite pages and requisites
20:20:56<HiccupJul_>great
20:21:04<HiccupJul_>and it won't count subdomains as offsite?
20:21:08<TheTechRobo>you can make it archive the entire web with `--wpull-args=--span-hosts=''` (untested but should work)
20:21:09<thuban>right. but i'm not actually sure whether--
20:21:11<TheTechRobo><HiccupJul_> and it won't count subdomains as offsite?
20:21:17<TheTechRobo>No idea
20:21:25<HiccupJul_>because redump uses a forum and wiki subdomain
20:21:43<TheTechRobo>You can specify multiple start URLs to grab-site
20:21:53<TheTechRobo>If it's only a few domains, just specify all of them
20:21:54<thuban>i _think_ it counts them as onsite
20:22:00<TheTechRobo>thuban: really?
20:22:04<HiccupJul_>yeah i could do that
20:22:10<HiccupJul_>although for all i know, there are other subdomains
20:22:13Wingy1139793760 quits [Read error: Connection reset by peer]
20:22:15<TheTechRobo>To be on the safe side, I'd just specify all of them.
20:22:20<thuban>but that's based on some comments in the docs re tumblr; i'd have to check the wpull code to be sure
20:22:21<HiccupJul_>and no way to know that without crawling, so chicken and egg problem
20:22:33<HiccupJul_>but yeah i will supply all subdomains i know
20:22:39<HiccupJul_>plus a few possibly unlisted pages
20:22:41<TheTechRobo>just explore the site a bit, and see if you find anything else
20:22:59<HiccupJul_>tbh i probably would have heard of it if there was anything else
20:23:07<HiccupJul_>i'm just being extra thorough
20:23:09Wingy1139793760 (Wingy) joins
20:25:10<TheTechRobo>HiccupJul_: http://irc.redump.org/ is an Apache default page
20:25:28<HiccupJul_>never heard of that one before
20:26:17<HiccupJul_>although irc://irc.redump.org exists
20:26:24<HiccupJul_>just never tried it with http
20:27:16<TheTechRobo>i just saw it as a subdomain, not as http
20:27:19<TheTechRobo>then I tried with http
20:27:26<TheTechRobo>also http://fc84ebe9582e4be0bac53d5342870c67.redump.org/ exists but is also an apache default page
20:27:41<HiccupJul_>strange
20:28:30<ThreeHM>BTW, I'm running wiki.redump.org through wikiteam for completeness
20:28:50<TheTechRobo>(finding this stuff through subdomain finders)
20:29:04<TheTechRobo>HiccupJul_: also, any invite links to Discord servers so I can archive them? :-)
20:30:36<HiccupJul_>yeah there's a few wikiteam dumps already
20:30:48<HiccupJul_>two on archive.org are mine, but haven't done one for a while so thanks
20:31:05<HiccupJul_>VGPC = Video Game Preservation Collective
20:31:13<HiccupJul_>that's the discord
20:31:39<HiccupJul_>i would give you an invite link, but you might not get in if you say you are there to scrape it
20:31:52<HiccupJul_>but i'll probably scrape it myself tbh
20:32:47<HiccupJul_>what would you recommend for scraping?
20:33:07<spirit>dont scrape with a discord account thats important to you
20:33:11<HiccupJul_>yeah i won't
20:33:16<HiccupJul_>i make throwaways for it
20:33:22<HiccupJul_>you can find the invite by googling anyway if you want to try
20:34:13<HiccupJul_>if it puts you at ease, we don't use the discord like a wiki, instead we use a website run by negligent admin ;)
20:35:13<spirit>:D
20:35:20<spirit>as a negligent admin myself i can relate
20:36:18<HiccupJul_>i'm sure you aren't as bad
20:36:26<HiccupJul_>the fact that you admit it shows you aren't as bad lol
20:36:35Wingy1139793760 quits [Read error: Connection reset by peer]
20:37:31Wingy1139793760 (Wingy) joins
20:42:58<ThreeHM>Oh fun, my wikiteam upload got detected as malware and deleted again (╯°□°)╯︵ ┻━┻
20:44:22<ThreeHM>Now I get to upload it compressed with zip instead of 7z to avoid the false positive, only like 50x the file size this way...
20:44:47<thuban>zst?
20:46:39<HiccupJul_>ah
20:46:45<HiccupJul_>ThreeHM: email IA
20:46:50<HiccupJul_>i got mine restored like that
20:47:23<ThreeHM>thuban: Good idea, that seems to go undetected and it's only double the file size compared to 7z
20:47:49<HiccupJul_>i think even a single detection as anything in virustotal triggers it
20:48:10<HiccupJul_>i think these big zips get detected falsely as zip bombs
20:49:39<HiccupJul_>but you already know that i'm sure
20:49:43<ThreeHM>Yeah, it's a bit paranoid on IA's side IMO. Especially considering how awful most (if not all) antivirus products are
20:50:09<HiccupJul_>yeah it is, luckily they seem to be okay with un-hiding the item if you ask
20:50:51<HiccupJul_>i wonder if there's a virus collection on archive.org...
20:51:18Wingy1139793760 quits [Read error: Connection reset by peer]
20:52:04<HiccupJul_>plus i think its valid to keep viruses, if that data is part of a CD-ROM or site or something
20:52:12Wingy1139793760 (Wingy) joins
20:52:38<HiccupJul_>within reason, so probably only old viruses
20:59:39HiccupJul_ quits [Client Quit]
21:03:57HiccupJul1 (HiccupJul) joins
21:04:12<HiccupJul1>now using thelounge
21:04:43<HiccupJul1>HiccupJul will eventually auto-disconnect i guess
21:06:19Wingy1139793760 quits [Read error: Connection reset by peer]
21:07:14Wingy1139793760 (Wingy) joins
21:12:46<TheTechRobo>HiccupJul1: I won't say I'm scraping :-)
21:12:55<TheTechRobo>HiccupJul1: The scraping tool I use is https://github.com/Sanqui/discard2
21:13:08<TheTechRobo>plus https://github.com/TheTechRobo/discord-urls-extractor for extracting urls
21:14:53<TheTechRobo>#discard for more discussion
21:16:52tech_exorcist_ quits [Remote host closed the connection]
21:17:36<thuban>i now think i was wrong earlier, but the wpull code is too abstracted for me to be certain.
21:17:41tech_exorcist (tech_exorcist) joins
21:18:11<TheTechRobo>thuban: what do you mean by abstracted?
21:19:14katocala quits [Read error: Connection reset by peer]
21:20:02<thuban>sessions, factories, delegation, etc etc
21:20:28katocala joins
21:21:27<thuban>(i spent at least five minutes just trying to figure out what called the span-hosts filter in the first place; turned out it was in URLFiltersPostURLImportSetupTask instead of URLFiltersSetupTask)
21:27:33DogsRNice (Webuser299) joins
21:41:37tech_exorcist quits [Remote host closed the connection]
21:41:57tech_exorcist (tech_exorcist) joins
21:42:14Wingy1139793760 quits [Ping timeout: 265 seconds]
21:42:50<thuban>JAA: wpull (and therefore grab-site/archivebot) considers foo.example.com to be a _different_ hostname from example.com and therefore does _not_ by default recurse into it from a starting url on example.com (except for page requisites and single links as are normally permitted offsite),
21:43:21<thuban>_and_ although `--domains example.com` will whitelist foo.example.com (and indeed all other subdomains), it still won't be recursed into without also setting `--span-hosts`, which is incompatible with `--span-hosts-allow`, so the normal behavior of permitting offsite page requisites and single links will be disabled by `--domains`,
21:43:32<thuban>_so_ the best way of getting both example.com and foo.example.com recursively is just to supply them both as starting urls in the same crawl.
21:43:35<thuban>is that correct?
21:58:02<@JAA>thuban: The first part sounds correct, yeah. The conclusion is not though. If you do that, any link from example.com to foo.example.com or vice-versa will be considered off-site, and it won't recurse further on those URLs, so you may miss large swaths of both sites. I think your desired behaviour is only possible with a custom accept_url hook.
21:59:10<@JAA>(On that note, there's no special-casing of 'www' either; example.com and www.example.com are different domains as well.)
22:04:01<thuban>huh, ok. i'm a bit confused, though:
22:08:42<thuban>the SpanHostsFilter is constructed (https://github.com/ArchiveTeam/wpull/blob/develop/wpull/application/tasks/rule.py#L93) with multiple hostnames from the initial URLTable (https://github.com/ArchiveTeam/wpull/blob/develop/wpull/database/sqltable.py#L226), and its test() method checks whether the url's hostname matches _any_ of them
22:08:44<thuban>(https://github.com/ArchiveTeam/wpull/blob/develop/wpull/urlfilter.py#L239).
22:08:58<@JAA>Look at the ParentFilter.
22:09:24<@JAA>See also: https://github.com/ArchiveTeam/wpull/issues/373
22:09:47<@JAA>Er no, not ParentFilter.
22:09:55<thuban>yeah, i was about to say
22:10:14<@JAA>It is the SpanHostFilter, yes.
22:10:17<thuban>^ what is that query actually getting, if not hostnames from multiple start points? (why can it return multiple hostnames at all?)
22:12:47DiscantX joins
22:13:32<thuban>((the obvious thing to do here is just run it and _see_ what it does, but i don't happen to have a random subdomain handy...))
22:14:12<@JAA>Yeah, it should return all hostnames of the initial URLs.
22:14:17<@JAA>all unique hostnames*
22:14:46<@JAA>So actually, it might work the way you want. I thought it didn't, but maybe that was something else.
22:15:23<@JAA>I think I observed such things with AB; perhaps that's something in the AB plugin rather than wpull.
22:17:23<thuban>_huh_
22:18:17<HiccupJul1>sounds like i need to find a small site with a subdomain and try it out...
22:18:19<@JAA>I'm not sure you can combine it with --domains though. You'd need every relevant domain in the initial URLs.
22:18:36<@JAA>Else SpanHostsFilter would still get hungry and eat those.
22:21:56Ruk8 joins
22:23:34<thuban>not sure what you mean
22:24:08<Ruk8>Hello everyone! Sorry if this is not the right place to discuss this topic; I'm an Italian wannabe archivist and it's my first time here.
22:24:44<thuban>hello Ruk8, what's on your mind?
22:24:50<@JAA>thuban: Starting a crawl from https://example.org/ with `--domains example.org` without `--span-hosts-allow` will not let it recurse through subdomains because those do not appear in the initial URL hostnames and thus get kicked out by SpanHostsFilter.
22:25:37<@JAA>Er, with --span-hosts-allow I think. A bare --span-hosts would probably work (and also recurse all over the rest of the web).
22:26:00Jake (Jake) joins
22:26:41<thuban>JAA: yeah, that's what i said https://hackint.logs.kiska.pw/archiveteam-bs/20220725#c323963
22:27:00<@JAA>Yeah, I'm just confused and tired. Better don't listen to me right now. :-)
22:27:37<Ruk8>Recently, I took an archive of all the public Adobe downloads (yes, they still exist, mainly for enterprise customers, not the CC ones, to be clear) that are not present in and cannot be crawled by the Internet Archive's Save Page Now service. The archive is approximately 60 GB, and I'm wondering how I can upload it to the Internet Archive and, more
22:27:38<Ruk8>importantly, make the archive indexable by the Wayback Machine.
22:27:58<Ruk8>(60 GB uncompressed)
22:28:12<Ruk8>I read you need to be whitelisted to do that
22:28:29<@JAA>thuban: A crude workaround would be to use --span-hosts with --accept-regex like ^https?://([^/]*\.)?example\.org(:\d+)?/ to emulate --domains and spanning only within the domain.
22:28:57<thuban>JAA: lol. get some sleep
22:29:29<@JAA>I will, as soon as the room cools down enough to be able to. ._.
22:29:57<thuban>(the regex is both amusing and mildly evil, but still wouldn't grab 'offsite' page reqs)
22:31:34<@JAA>Correct. I think that behaviour can only be achieved with an accept_url hook which would effectively reimplement wpull's filters in a different way.
22:32:16<@JAA>Oh yeah, the regex also won't catch URLs with an auth part, so ehh. :-)
22:33:42<@JAA>^https?://([^/]*[@.])?example\.org(:\d+)?/
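Since wpull is Python and compiles `--accept-regex` with the `re` module, JAA's revised pattern can be sanity-checked directly (example.org as in the discussion; the test URLs are made up):

```python
import re

# JAA's second revision: [@.] also admits an auth part before the hostname.
pattern = re.compile(r'^https?://([^/]*[@.])?example\.org(:\d+)?/')

print(bool(pattern.match('https://sub.example.org/page')))    # True: subdomain
print(bool(pattern.match('https://user@example.org:8080/')))  # True: auth part and port
print(bool(pattern.match('https://notexample.org/')))         # False: different domain
print(bool(pattern.match('https://example.org.evil.com/')))   # False: suffix trick rejected
```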
22:34:16<thuban>if by "that behavior" you mean recursing into subdomains, i think my original suggestion of supplying both domains as start points was correct! in fact in #373 it's exactly the behavior you're complaining of.
22:35:11<@JAA>'That behaviour' = start from only example.org without specifying all subdomains (because you might not even know all of them?) but recursing through them while also retrieving page requisites from other domains.
22:36:23<@JAA>(Or start from a subset of all subdomains and following/recursing based on links or whatever. Anything that isn't a complete list of the subdomains.)
22:36:37<thuban>oic. yes, you're quite right about that.
22:36:52<HiccupJul1>for my use case, would it be okay to use --input-file and in that file, put redump.org, forum.redump.org and wiki.redump.org
22:37:03<HiccupJul1>(assuming there are no other subdomains)
22:37:07<HiccupJul1>?
22:37:39<HiccupJul1>i guess that would work
22:37:50<HiccupJul1>but if it encountered secret.redump.org, it wouldn't traverse far into that
22:37:53lennier1 quits [Quit: Going offline, see ya! (www.adiirc.com)]
22:38:15<Jake>Ruk8: typically only very specific users get their WARCs into the Wayback Machine. Do you have a list of URLs we could archive through #archivebot maybe?
22:38:28<Ruk8>Sure
22:38:38lennier1 (lennier1) joins
22:39:15<Ruk8>Can I DM u the list?
22:39:33tech_exorcist quits [Client Quit]
22:40:20<thuban>Ruk8: anything that goes through archivebot will end up public.
22:40:45<@JAA>HiccupJul1: If you use --span-hosts-allow with that, yes, I think that's right.
22:40:54<Ruk8>Yeah, I asked that only to not flood the chat xD
22:40:59<thuban>if you're just concerned about the file transfer, may i suggest transfer.archivete.am?
22:46:25<Ruk8>Done, here's the URL list: https://transfer.archivete.am/cVM94/adobe_framemaker_archive_urls.txt
22:49:28<HiccupJul1>JAA: why would you need --span-hosts-allow ?
22:50:01<@JAA>Ruk8: Thanks, I've thrown it into ArchiveBot.
22:50:46<@JAA>HiccupJul1: To enable the SpanHostsFilter, which allows recursing across sites. You specifically want the -allow one because I assume you don't want to crawl the entire internet.
22:51:38<thuban>HiccupJul1: this is not necessary for your use case; it is automatically set by grab-site.
22:52:07<@JAA>Ruk8: You can monitor the progress on http://dashboard.at.ninjawedding.org/ (job dnszlwttejc3iypadr3o0rdxi). When it finishes, it will show up in the Wayback Machine usually within a couple days.
22:54:09<@JAA>HiccupJul1: Sorry, didn't read the entire backlog, only thuban's question. Yeah, grab-site sets that already for you.
22:56:24<HiccupJul1>ah
22:56:38<HiccupJul1>okay so just grab-site pointed at a list with each (sub)domain is fine? great
22:56:50<HiccupJul1>i'll set that running soon tonight
22:57:47<@JAA>Probably hopefully maybe :-)
22:58:01<HiccupJul1>well, i'll be sure to explore the site once it's done
22:58:09<HiccupJul1>explore the mirror
22:58:42<Ruk8>Thanks everyone for the support
22:59:39<thuban>HiccupJul1: you may or may not also wish (after consulting your conscience with regard to the login wall) to upload the result to archive.org--not everything has to be in the wbm to be useful
22:59:48<thuban>Ruk8: you're very welcome!
23:00:00<Ruk8>I'm happy to have given my contribution
23:04:13<TheTechRobo>Ruk8: thanks :-)
23:09:49Ruk8 quits [Remote host closed the connection]
23:09:50<HiccupJul1>ah yeah i will be uploading to IA
23:10:26<HiccupJul1>and there are plans for a new site if redump goes kaput
23:13:10<HiccupJul1>(by redump users)
23:28:11nikow1 joins
23:28:30DiscantX quits [Remote host closed the connection]
23:28:59user_ (gazorpazorp) joins
23:29:03Matthww7 joins
23:29:40Atom quits [Read error: Connection reset by peer]
23:30:41AnotherIki quits [Ping timeout: 240 seconds]
23:30:41@Sanqui quits [Ping timeout: 240 seconds]
23:31:11Frogging101 quits [Ping timeout: 240 seconds]
23:31:11girst quits [Ping timeout: 240 seconds]
23:31:41DogsRNice quits [Ping timeout: 240 seconds]
23:31:41ats_ quits [Ping timeout: 240 seconds]
23:31:41@rewby quits [Ping timeout: 240 seconds]
23:32:11drexler quits [Ping timeout: 240 seconds]
23:32:11FalconK quits [Ping timeout: 240 seconds]
23:32:41ThreeHM quits [Ping timeout: 240 seconds]
23:33:11cronfox quits [Ping timeout: 240 seconds]
23:33:11@HCross quits [Ping timeout: 600 seconds]
23:33:41@Fusl_ quits [Ping timeout: 600 seconds]
23:33:41phuz-test quits [Remote host closed the connection]
23:34:11Gaelan quits [Ping timeout: 240 seconds]
23:35:11celestial quits [Ping timeout: 240 seconds]
23:36:11justcool393 quits [Ping timeout: 600 seconds]
23:37:11@jrwr quits [Ping timeout: 600 seconds]
23:38:39yano1 (yano) joins
23:38:41cptcobalt_ joins
23:38:51masterX244_ (masterX244) joins
23:38:57Craigle8 (Craigle) joins
23:39:17@Fusl quits [Excess Flood]
23:39:17sepro quits [Read error: Connection reset by peer]
23:39:17nikow quits [Read error: Connection reset by peer]
23:39:17jmtd quits [Ping timeout: 246 seconds]
23:39:17Ruthalas3 joins
23:39:26tzt quits [*.net *.split]
23:39:26shreyasminocha quits [*.net *.split]
23:39:26jamesp quits [*.net *.split]
23:39:26thehedgeh0g quits [*.net *.split]
23:39:26evan quits [*.net *.split]
23:39:26spirit quits [*.net *.split]
23:39:27George quits [*.net *.split]
23:39:27gazorpazorp quits [*.net *.split]
23:39:27jodizzle quits [*.net *.split]
23:39:27qwertyasdfuiopghjkl quits [*.net *.split]
23:39:27Ruthalas quits [*.net *.split]
23:39:27dm4v quits [*.net *.split]
23:39:27Matthww quits [*.net *.split]
23:39:27xkey quits [*.net *.split]
23:39:27jspiros quits [*.net *.split]
23:39:27thuban quits [*.net *.split]
23:39:27Barto quits [*.net *.split]
23:39:27systwi quits [*.net *.split]
23:39:27Lord_Nightmare quits [*.net *.split]
23:39:27jacobk quits [*.net *.split]
23:39:28duce1337 quits [*.net *.split]
23:39:28jonty quits [*.net *.split]
23:39:28JSharp quits [*.net *.split]
23:39:28Ctrl-S quits [*.net *.split]
23:39:28Matthww7 is now known as Matthww
23:42:58Ruthalas3 quits [*.net *.split]
23:42:58katocala quits [*.net *.split]
23:42:58qw3rty quits [*.net *.split]
23:42:58wickedplayer494 quits [*.net *.split]
23:42:58igloo22225 quits [*.net *.split]
23:42:58Mateon1 quits [*.net *.split]
23:42:58nothere quits [*.net *.split]
23:42:58knecht420 quits [*.net *.split]
23:42:58michaelblob quits [*.net *.split]
23:42:59Stiletto quits [*.net *.split]
23:42:59Kinille quits [*.net *.split]
23:42:59masterX244 quits [*.net *.split]
23:42:59Craigle quits [*.net *.split]
23:42:59Mayk78 quits [*.net *.split]
23:42:59Larsenv quits [*.net *.split]
23:42:59cpina quits [*.net *.split]
23:42:59kiska quits [*.net *.split]
23:42:59endrift quits [*.net *.split]
23:42:59Doranwen quits [*.net *.split]
23:42:59@hook54321 quits [*.net *.split]
23:42:59mjh_ quits [*.net *.split]
23:42:59h2ibot quits [*.net *.split]
23:42:59tech234a quits [*.net *.split]
23:43:00NotEggplant quits [*.net *.split]
23:43:00sembiance quits [*.net *.split]
23:43:00maxfan8 quits [*.net *.split]
23:43:00betamax_ quits [*.net *.split]
23:43:00pekster` quits [*.net *.split]
23:43:00@AlsoJAA_ quits [*.net *.split]
23:43:00fionera quits [*.net *.split]
23:43:00Soul_ quits [*.net *.split]
23:43:00Deewiant quits [*.net *.split]
23:43:00kiskaLogBot quits [*.net *.split]
23:43:00T31M quits [*.net *.split]
23:43:00@arkiver quits [*.net *.split]
23:43:00AK quits [*.net *.split]
23:43:00yano quits [*.net *.split]
23:43:00@chfoo quits [*.net *.split]
23:43:01@OrIdow6 quits [*.net *.split]
23:43:01mr_archive2 quits [*.net *.split]
23:43:01wessel1512 quits [*.net *.split]
23:43:01marto_ quits [*.net *.split]
23:43:01fuzzy8021 quits [*.net *.split]
23:43:01colona quits [*.net *.split]
23:43:01devsnek quits [*.net *.split]
23:43:01asie2 quits [*.net *.split]
23:43:01nimaje quits [*.net *.split]
23:43:01mgrandi quits [*.net *.split]
23:43:01Jonimoose quits [*.net *.split]
23:43:01cptcobalt quits [*.net *.split]
23:43:01eythian quits [*.net *.split]
23:43:01murmur quits [*.net *.split]
23:43:01Muad-Dib quits [*.net *.split]
23:43:01[42] quits [*.net *.split]
23:43:01Craigle8 is now known as Craigle
23:43:01cptcobalt_ is now known as cptcobalt
23:44:40Frogging101 joins
23:46:12cpina joins
23:46:15Barto (Barto) joins
23:46:16Larsenv (Larsenv) joins
23:46:16devsnek (devsnek) joins
23:46:34Sanqui joins
23:46:37dm4v joins
23:46:37Soulflare joins
23:46:37[42] joins
23:46:37girst joins
23:46:37fionera joins
23:46:37wickedplayer494 joins
23:46:37maxfan8_ joins
23:46:37ThreeHM joins
23:46:37BlueMaxima joins
23:46:37mr_archive2 joins
23:46:37Gaelan joins
23:46:37jacobk joins
23:46:37Muad-Dib joins
23:46:37nimaje joins
23:46:37Stiletto joins
23:46:37tzt joins
23:46:37Ctrl-S joins
23:46:37kiskaLogBot joins
23:46:37eythian joins
23:46:37spirit2 joins
23:46:37wessel1512 joins
23:46:37Atom-- joins
23:46:37Ruthalas joins
23:46:37katocala (katocala) joins
23:46:37nothere joins
23:46:37knecht420 (knecht420) joins
23:46:37endrift joins
23:46:37T31M (T31M) joins
23:46:37arkiver (arkiver) joins
23:46:38fuzzy8021 (fuzzy8021) joins
23:46:38guybrush.hackint.org sets mode: +o arkiver
23:46:39Sanqui quits [Changing host]
23:46:39Sanqui (Sanqui) joins
23:46:39@ChanServ sets mode: +o Sanqui
23:46:42thuban joins
23:46:46fionera quits [Signing in (fionera)]
23:46:46fionera (Fionera) joins
23:46:48fionera is now known as RJHacker61313
23:46:48RJHacker61313 quits [Client Quit]
23:47:02mjh_ joins
23:47:14systwi (systwi) joins
23:47:23hook54321 (hook54321) joins
23:47:23@ChanServ sets mode: +o hook54321
23:47:27Lord_Nightmare (Lord_Nightmare) joins
23:47:29fionera (Fionera) joins
23:47:34fionera quits [Client Quit]
23:47:45sepro (sepro) joins
23:47:47jonty (jonty) joins
23:47:49OrIdow6 (OrIdow6) joins
23:47:49@ChanServ sets mode: +o OrIdow6
23:47:59HackMii_ quits [Remote host closed the connection]
23:47:59mutantmonkey quits [Remote host closed the connection]
23:47:59sec^nd quits [Write error: Broken pipe]
23:48:07cronfox (Cronfox) joins
23:48:14Deewiant (Deewiant) joins
23:48:18chfoo (chfoo) joins
23:48:18@ChanServ sets mode: +o chfoo
23:48:40HackMii_ (hacktheplanet) joins
23:48:45asie2 joins
23:48:53colona (colona) joins
23:49:14ats (ats) joins
23:49:42tech234a (tech234a) joins
23:49:51phuzion (phuzion) joins
23:49:55murmur joins
23:50:35pekster (pekster) joins
23:51:20nikow1 quits [Remote host closed the connection]