00:31:24 | <h2ibot> | Pokechu22 edited List of website hosts (+28, /* F */ freeservers also has 8m.net): https://wiki.archiveteam.org/?diff=51495&oldid=51494 |
00:32:24 | <h2ibot> | Pokechu22 edited List of website hosts (+55, /* 0-9 */ 20m.com): https://wiki.archiveteam.org/?diff=51496&oldid=51495 |
00:38:50 | | riku quits [Ping timeout: 240 seconds] |
00:54:10 | | wickedplayer494 is now authenticated as wickedplayer494 |
01:22:03 | <fireonlive> | I think what is more surprising is that Angelfire is still around
02:09:57 | | Wohlstand (Wohlstand) joins |
02:15:28 | | BlueMaxima quits [Read error: Connection reset by peer] |
03:13:26 | | Wohlstand quits [Remote host closed the connection] |
03:17:57 | <h2ibot> | FireonLive edited YouTube (+166, extend IP warning to the wiki as well): https://wiki.archiveteam.org/?diff=51497&oldid=51413 |
03:18:00 | <fireonlive> | the wording on that sucks musty asshole, but i figured i'd add it before i forgot |
03:23:58 | <h2ibot> | FireonLive edited YouTube (-14, no more ChromeBot): https://wiki.archiveteam.org/?diff=51498&oldid=51497 |
03:28:50 | | nulldata quits [Ping timeout: 240 seconds] |
03:32:44 | <fireonlive> | (the whole page needs a good deep loving but i'm too tired) |
03:35:50 | | nic9070 quits [Ping timeout: 240 seconds] |
03:35:59 | | nic9070 (nic) joins |
03:37:29 | | nulldata (nulldata) joins |
03:59:58 | | Shjosan quits [Quit: Am sleepy (-, – )…zzzZZZ] |
04:00:35 | | Shjosan (Shjosan) joins |
04:40:47 | <Pedrosso> | I had asked about this in December but the end-of-year shutdowns had to come first. Should there be another DPoS of FurAffinity? I don't doubt the site's stability, but a lot of user content is still regularly deleted, much like Reddit, and the original grab was way back in 2015
05:03:53 | <fireonlive> | i believe there was general support for that last time; could even be a continuous thing :3 |
05:04:00 | <fireonlive> | cc arkiver too |
05:05:59 | <@arkiver> | hi |
05:06:32 | <@arkiver> | Pedrosso: can you make data deletion clear in some way? |
05:06:41 | <@arkiver> | for people not familiar to understand the significance/scale of it |
05:12:49 | | DogsRNice quits [Read error: Connection reset by peer] |
05:12:58 | <audrooku|m> | hey, I have 145 small sites I would like to request be crawled. ideally some of them should have their domain crawled, and for some of them I would like to crawl all children of a given path (recursively, but only matching the parent path, usually because it's a subdomain or a specific blog on a site). is this possible?
05:14:13 | <project10> | JAA: do you know the bloom filter properties (capacity, error rate, bits per hash) for the tracker bloom implementation? |
05:19:25 | <thuban> | audrooku|m: it's possible. why should they be archived? also, are any of the sites likely to link to one another? |
05:19:55 | <@arkiver> | project10: 1/1000000 false positive rate |
05:20:15 | <@arkiver> | or well, that is the maximum false positive rate |
05:20:36 | <@arkiver> | the bloom filter expands to keep a maximum of 1/1000000 (1 over a million) false positive rate |
05:20:38 | <project10> | that's useful to know! I think that in combination with the (maximum?) set size defines bits_per_hash |
05:20:41 | <@arkiver> | expand here means double in size |
05:21:01 | <project10> | oh, it's a Scalable Bloom Filter? |
05:21:04 | <@arkiver> | yes |
05:21:29 | <project10> | thanks arkiver! very helpful |
05:21:32 | <@arkiver> | :) |
05:21:41 | <@arkiver> | it's been working pretty well for the projects |
05:21:50 | <@arkiver> | but it does grow and grow... and grow |
05:23:13 | <project10> | now I'm halfway curious as to the size of the 69.5B #// bloom filter :) |
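(For context: the standard Bloom filter sizing formula, bits per element = -ln(p) / (ln 2)^2, works out to about 28.8 bits per element at p = 1/1,000,000, which gives a rough lower bound on the size project10 is wondering about. A minimal sketch in Python; a scalable filter grown by doubling will sit somewhat above this bound, since each layer is allocated before it fills.)

```python
import math

def bloom_bits_per_element(p: float) -> float:
    """Bits per element for a classic Bloom filter at false-positive rate p."""
    return -math.log(p) / (math.log(2) ** 2)

def bloom_size_bytes(n_items: int, p: float) -> float:
    """Approximate filter size in bytes for n_items elements at rate p."""
    return n_items * bloom_bits_per_element(p) / 8

# Rough lower bound for 69.5 billion entries at p = 1e-6:
print(f"{bloom_size_bytes(69_500_000_000, 1e-6) / 1e9:.0f} GB")  # ~250 GB
```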
05:23:30 | <audrooku|m> | thuban: thanks for the response, I will submit a proper request soon with that information (including requested justification), and it's possible that they will link to each other, yes |
05:23:44 | <audrooku|m> | very likely |
05:23:54 | <Pedrosso> | arkiver: It's hard to approximate since FA gives an HTTP code of 200 even on missing items. However, they use a simple enumeration system (https://www.furaffinity.net/view/*), and going through that one can find that a lot of posts are missing. I can't give a complete estimate though. I'm suggesting the DPoS because links I've sent to people have stopped working as content is deleted, even if it's only a portion so far
05:27:09 | <thuban> | audrooku|m: ah, that makes it a bit more complicated--they would have to be in separate archivebot jobs |
05:27:17 | <thuban> | (so either someone manually feeds in all 145, or an op sets up a queue) |
05:30:10 | <@OrIdow6^2> | On Furaffinity, range samples like that may be complicated if there are spam removals in there |
05:30:21 | <Pedrosso> | That's true, |
05:32:40 | <Pedrosso> | So it's difficult to get an approximation there |
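(A minimal sketch of the sampling idea discussed above, assuming plain unauthenticated GETs suffice for public items and that deleted submissions can be recognized by marker text in the body; the marker string is a guess, and, per OrIdow6^2's point, spam removals would inflate the estimate.)

```python
import random
import requests

# Hypothetical marker text: FA returns HTTP 200 even for missing items,
# so deletion has to be detected from the page body, not the status code.
GONE_MARKER = "has been removed"

def estimate_deleted_fraction(max_id: int, samples: int = 500) -> float:
    """Sample random submission IDs from the enumeration space and count
    how many no longer resolve to a live submission."""
    gone = 0
    for sid in random.sample(range(1, max_id), samples):
        r = requests.get(f"https://www.furaffinity.net/view/{sid}/")
        if GONE_MARKER in r.text:
            gone += 1
    return gone / samples
```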
05:33:43 | <Pedrosso> | There are a lot of niche topics on FA, so the deletion of one artist's work can take out an entire topic (or at least the majority of it), notably unique characters and settings that aren't explored in much other media. An example would be the niche topic of multis (characters with more limbs than a human usually has & how those creatures would live & interact), which itself has many tiny interesting subgroups
05:37:24 | <h2ibot> | OrIdow6 edited FurAffinity (+81, /* Archives */ furarchiver.net): https://wiki.archiveteam.org/?diff=51499&oldid=49859 |
05:38:24 | | eythian quits [Quit: http://quassel-irc.org - Chat comfortabel. Waar dan ook.] |
05:38:29 | <nicolas17> | arkiver: are your pings working now? :P |
05:38:39 | <fireonlive> | seems to be |
05:38:49 | | eythian joins |
05:38:52 | <@arkiver> | nicolas17: they are working |
05:38:59 | <@arkiver> | nicolas17: you mean on what Pedrosso wrote? |
05:40:14 | <nicolas17> | (a few days ago you said they weren't, and I pinged you on something else and got no response, but I don't know your uhh effective timezone) |
05:40:35 | <@arkiver> | nicolas17: did i miss something? |
05:40:40 | <@arkiver> | i believe they are working |
05:41:54 | <nicolas17> | https://opensource.samsung.com/uploadSearch?searchValue=- I'd like to archive this |
05:42:12 | <@arkiver> | nicolas17: oh yeah the 800 GB of open source data right?
05:42:14 | <nicolas17> | I don't think it can go to WBM in any way because requests use POST and a one-time token, so it will have to be an IA item |
05:42:50 | <@arkiver> | i believe you estimated 5k files, the 800 GB could go into a single item, but not sure if that is the nicest thing to do |
05:43:17 | <@arkiver> | 2448 results i see |
05:43:25 | <@arkiver> | nicolas17: shall we do one item for each result? |
05:44:02 | <nicolas17> | "-" is not a comprehensive search string :) |
05:44:23 | <@arkiver> | do i see a 27 MB NOTICE.html file? :P |
05:44:29 | <nicolas17> | yes |
05:44:47 | <@arkiver> | nicolas17: if we do one item for each result, how many items would we end up with roughly? |
05:45:04 | <nicolas17> | I used sequential numbers on eg. https://opensource.samsung.com/downAnnMPop?uploadId=11931 to get number and size of files |
05:45:34 | <nicolas17> | and it seemed there's ~2500 |
05:46:53 | <@arkiver> | shall we do one item per result? |
05:47:03 | <nicolas17> | yeah I think that would work |
05:47:13 | <fireonlive> | would probably be cleanest |
05:47:14 | <nicolas17> | but then each item needs a decent description |
05:47:41 | <nicolas17> | I'm not currently looking at search results at all |
05:47:43 | <@arkiver> | nicolas17: i'd do a link to where you got it from, combined with any descriptive information you can find
05:48:11 | <@arkiver> | there doesn't seem to be a whole lot of descriptive information |
05:48:23 | <nicolas17> | but I'll have to, as I think the device model and all that is only on the search result row, not in downSrcMPop/downAnnMPop or in the files themselves |
05:48:29 | <@arkiver> | the "announcement" is just the multi-MB NOTICE.html |
05:48:48 | | Ruthalas59 (Ruthalas) joins |
05:50:08 | <nicolas17> | I downloaded *all* the NOTICEs |
05:50:30 | <@arkiver> | is there information in those perhaps for what you look for? |
05:51:04 | <nicolas17> | and "tar | zstd -19" compresses them to like 1% |
05:51:11 | <@arkiver> | nice |
05:51:44 | <nicolas17> | 1. it's text, 2. there's redundant data in each file, 3. there's significant redundant data *across* files |
05:52:15 | <nicolas17> | if most or all the html files have an entire copy of the GPL... yeah :P |
05:52:51 | <@arkiver> | i guess each item would have the corresponding NOTICE.html and the files belonging to the ID?
05:53:29 | <@arkiver> | you don't have to compress the NOTICE.html files for upload to IA - the total size is still not shockingly huge, and not compressing will make it easier to use
05:53:42 | <@arkiver> | use directly on IA for example - by loading the NOTICE.html |
05:53:42 | <nicolas17> | yeah, and there's a few with multiple source files or multiple ann files |
05:53:51 | <@arkiver> | yeah in that case the item would have multiple |
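(A sketch of what one-item-per-result could look like with the internetarchive Python library; the identifier scheme, file names, and metadata below are placeholders for illustration, not a settled plan.)

```python
from internetarchive import upload  # pip install internetarchive

upload_id = 11931  # one IA item per search result / uploadId

upload(
    f"samsung-opensource-{upload_id}",  # hypothetical identifier scheme
    files=[f"{upload_id}/NOTICE.html", f"{upload_id}/source.zip"],
    metadata={
        "title": f"Samsung Open Source Release Center, uploadId {upload_id}",
        "description": "Mirrored from https://opensource.samsung.com/ "
                       f"(uploadId={upload_id}).",
        "mediatype": "software",
    },
)
```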
05:53:59 | <@arkiver> | Pedrosso: looking into it now |
05:55:25 | <@arkiver> | Pedrosso: i read 18+ content is only available with login |
05:55:48 | <@arkiver> | is most of the continuously deleted data behind a login wall? |
05:55:48 | <nicolas17> | compressing each individual NOTICE would not give 99% savings anyway (that's only when compressing the whole tar at once) |
05:57:06 | <nicolas17> | so yeah, compression is not worth the annoyance for use |
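(The effect nicolas17 describes - compressing one solid tar versus compressing each file on its own - can be illustrated with the zstandard package; a sketch, and the ~99% saving is specific to files this redundant.)

```python
import io
import tarfile
import zstandard  # pip install zstandard

def solid_size(paths: list[str], level: int = 19) -> int:
    """Tar all files into one stream, then compress once: redundancy
    *across* files (e.g. repeated license texts) compresses away."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for p in paths:
            tar.add(p)
    return len(zstandard.ZstdCompressor(level=level).compress(buf.getvalue()))

def per_file_size(paths: list[str], level: int = 19) -> int:
    """Compress each file separately: only redundancy *within* a file helps."""
    cctx = zstandard.ZstdCompressor(level=level)
    return sum(len(cctx.compress(open(p, "rb").read())) for p in paths)
```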
05:59:31 | <nicolas17> | funny thing, with all the problems of IA upload sometimes being slow, plus my Internet connection (like most residential ISPs) having much faster download than upload... it will *still* be slower to download from samsung than to upload to IA :D |
05:59:41 | <@arkiver> | ouch :P |
05:59:53 | <nicolas17> | I don't know if it's awful routing or intentional throttling but I get ~200KB/s |
06:00:09 | <nicolas17> | parallelism helps |
06:01:33 | <DigitalDragons> | arkiver: yes, 18+ content needs an account |
06:02:09 | | Ruthalas59 quits [Client Quit] |
06:02:54 | <DigitalDragons> | it will be silently hidden from everywhere if you aren't logged in, or present an error if you try to view a direct link to nsfw content |
06:03:08 | | _Dango360 quits [Read error: Connection reset by peer] |
06:03:42 | | Ruthalas59 (Ruthalas) joins |
06:07:50 | <nicolas17> | okay will bikeshed specifics tomorrow |
06:10:40 | | Ruthalas59 quits [Client Quit] |
06:14:03 | <@OrIdow6^2> | Also some stuff is gated behind a login but not 18+, just because the user elects to make it so
06:26:19 | <@OrIdow6^2> | I would like at some point to gather outgoing links from there and similar sites that host uploads but don't support all file types - lots of stuff ends up in Dropbox/Google Drive/similar
06:27:04 | <@OrIdow6^2> | If there does end up being a proactive ongoing project, it could be good for that; Dropbox in particular seems to have changed their URL format a few years ago and broken a bunch of downloads
06:27:06 | <fireonlive> | ooh good idea; should start collecting google drive/dropbox links too |
06:27:07 | | Island quits [Read error: Connection reset by peer] |
06:28:06 | <@OrIdow6^2> | The dropbox links I encounter can be nicely saved by changing the GET parameter "dl" from "0" to "1", but sadly the domain or relevant path prefix is excluded from the WBM |
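(The dl=0 to dl=1 rewrite is mechanical; a small standard-library sketch, with a made-up example link.)

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def direct_dropbox_url(url: str) -> str:
    """Rewrite a Dropbox share link so it serves the file directly:
    dl=0 renders a preview page, dl=1 returns the raw download."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["dl"] = "1"
    return urlunparse(parts._replace(query=urlencode(query)))

print(direct_dropbox_url("https://www.dropbox.com/s/abc123/file.zip?dl=0"))
# https://www.dropbox.com/s/abc123/file.zip?dl=1
```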
06:28:32 | <Pedrosso> | arkiver: "is most of the continuously deleted data behind a login wall?" I don't know about 18+ content, as I don't view it. But I know that some users' content is only available logged-in regardless of its sfw/nsfw status.
06:29:24 | <Pedrosso> | I also know the old 2015 grab included 18+ content |
06:32:13 | <Pedrosso> | I'd assume that the majority of content which is not 18+ is publicly accessible
06:32:36 | <fireonlive> | >sadly the domain or relevant path prefix is excluded from the WBM |
06:32:37 | <fireonlive> | :( |
06:35:20 | <Barto> | fireonlive: video playback seems to be broken in nitter, a fix was upstreamed, i'm gonna deploy https://github.com/zedeus/nitter/commit/52db03b73ad5f83f67c83ab197ae3b20a2523d39 shortly |
06:38:40 | | IKFKconnection joins |
06:41:16 | | IKFKconnection quits [Remote host closed the connection] |
06:59:45 | | Ruthalas59 (Ruthalas) joins |
07:03:41 | <Barto> | fireonlive: deployed |
07:08:36 | | Ruthalas59 quits [Client Quit] |
07:15:13 | | Ruthalas59 (Ruthalas) joins |
07:18:54 | <fireonlive> | Barto: :D thanks |
07:31:56 | | Gereon900 (Gereon) joins |
07:32:39 | | Gereon90 quits [Ping timeout: 272 seconds] |
07:32:39 | | Gereon900 is now known as Gereon90 |
07:40:15 | | Soulflare quits [Ping timeout: 272 seconds] |
07:42:59 | | Soulflare joins |
07:49:45 | | le0n quits [Ping timeout: 272 seconds] |
07:55:20 | | Soulflare quits [Ping timeout: 240 seconds] |
08:02:37 | | Soulflare joins |
08:08:05 | | Dango360 (Dango360) joins |
08:23:35 | | Barto quits [Read error: Connection reset by peer] |
08:33:47 | | Barto (Barto) joins |
09:17:10 | | le0n (le0n) joins |
09:21:50 | | le0n quits [Ping timeout: 240 seconds] |
09:33:58 | | le0n (le0n) joins |
10:00:00 | | Bleo18260 quits [Client Quit] |
10:01:18 | | Bleo18260 joins |
10:07:20 | | decky_e quits [Ping timeout: 240 seconds] |
10:07:27 | | ctag quits [Read error: Connection reset by peer] |
10:07:50 | | ctag (ctag) joins |
10:15:00 | | treora quits [Quit: blub blub.] |
10:16:43 | | treora joins |
10:30:35 | | decky_e joins |
10:43:53 | <audrooku|m> | RE: the request I mentioned to Thuban above... (full message at <https://matrix.hackint.org/_matrix/media/v3/download/hackint.org/IYZKqmhthLzUWYgXaqfYBeIf>) |
10:55:44 | <thuban> | audrooku|m: thanks! |
10:56:25 | <audrooku|m> | based on what I read on the archivebot wiki page the default behavior is what I desire for both url lists (only crawl pages that match the seed url) |
10:57:08 | <thuban> | correct |
10:59:15 | <thuban> | those aren't bad, actually; my understanding (and i just went and double-checked the code) is that the seven -children items will need their own jobs, but -domains can be one big `!a <` (since you want the entirety of each site)
11:02:22 | <audrooku|m> | is this something I need to do or an OP needs to do? |
11:07:05 | <thuban> | audrooku|m: only voiced users can submit jobs, and only ops can run `!a <` jobs or set up queues (not sure which one would be preferred here) |
11:07:51 | <audrooku|m> | Thuban: Alright thanks |
11:08:18 | <thuban> | someone will probably get to it in the next day or two; thanks for the suggestions :) |
12:11:15 | | IDK (IDK) joins |
12:16:58 | | qwertyasdfuiopghjkl quits [Remote host closed the connection] |
13:22:53 | | Arcorann quits [Ping timeout: 272 seconds] |
13:40:37 | | Naruyoko quits [Ping timeout: 272 seconds] |
13:50:57 | | riku (riku) joins |
13:57:52 | | flem joins |
13:58:04 | <betamax> | I am very much regretting not uploading my archive of 2022 US midterm campaign sites sooner.... |
13:58:08 | <betamax> | Pulled out the drive today to check something else on it and it no longer powers up, and there's damage to the PCB |
13:58:12 | <betamax> | it's.... not going to work again :( |
13:58:27 | <Pedrosso> | :( |
13:58:54 | <Pedrosso> | There's nothing that can be done? |
13:59:45 | | flem quits [Remote host closed the connection] |
13:59:47 | <nicolas17> | betamax: is it a magnetic hard disk? |
14:00:07 | <betamax> | yup, 1TB magnetic disk |
14:00:20 | <betamax> | I can see a chip out of the PCB, no idea how that happened |
14:00:29 | <nicolas17> | that's fixable |
14:00:47 | <nicolas17> | maybe only by professional data recovery companies but fixable |
14:01:01 | <betamax> | yeah, but at what cost |
14:01:19 | <betamax> | I'm going to label the HDD with what was on it and what is wrong with it, and put it into a box |
14:01:33 | <nicolas17> | also |
14:01:44 | <betamax> | then if someone in the future really wants to know what a campaign website looked like (and it's not on wayback) I can revisit it |
14:01:55 | <betamax> | (thankfully I have a full list of the sites that were on it) |
14:02:17 | <nicolas17> | if a disk head gets damaged, I would *want* professional data recovery to open it in a professional cleanroom to replace it (I have had to deal with that before) |
14:02:41 | <nicolas17> | but PCB damage is more accessible to DIY |
14:03:03 | <nicolas17> | not necessarily by yourself but like, hardware-nerd friends |
14:03:11 | <betamax> | it's not something I have the time or skills for now, but I'll keep the drive around just in case |
14:03:23 | <nicolas17> | yeah |
14:03:45 | <nicolas17> | it doesn't have to be here and now, I was just giving optimism to the "not going to work again" :) |
14:05:16 | <betamax> | thanks! |
14:05:30 | <betamax> | It'll go with the other dead HDD of lost material :\ |
14:11:59 | | Megame (Megame) joins |
15:30:49 | | inedia quits [Ping timeout: 272 seconds] |
15:56:58 | <nicolas17> | JAA: yes we know about Hobbes... but is anything being done about it? |
15:58:26 | <nicolas17> | oh there's an 18 GB tar
15:58:37 | <nicolas17> | (why didn't they use bittorrent...) |
16:08:55 | <@JAA> | nicolas17: According to Jason, there are also already multiple copies of Hobbes. |
16:13:24 | <fireonlive> | there was also an AB job started |
16:18:41 | | nexusxe (nexusxe) joins |
16:23:27 | | Doranwen quits [Remote host closed the connection] |
16:23:47 | | Doranwen (Doranwen) joins |
16:23:55 | | inedia (inedia) joins |
16:29:50 | | ymgve_ is now known as ymgve |
16:46:43 | | emberquill08 quits [Quit: The Lounge - https://thelounge.chat] |
16:47:32 | | emberquill080 (emberquill) joins |
16:52:45 | <h2ibot> | JustAnotherArchivist edited Deathwatch (+255, /* 2024 */ Add RuneScape forums): https://wiki.archiveteam.org/?diff=51500&oldid=51491 |
17:15:31 | | Wohlstand (Wohlstand) joins |
17:17:32 | | Wohlstand quits [Client Quit] |
17:28:18 | | HP_Archivist (HP_Archivist) joins |
18:09:34 | <Vokun> | Is the RuneScape forum more suitable for AB or DPoS?
18:11:50 | | c3manu (c3manu) joins |
18:24:27 | | ctag quits [Client Quit] |
18:32:10 | | ctag (ctag) joins |
18:33:51 | <betamax> | On the subject of broken drives (losing the drive earlier has made me realise I need to sort out my "old HDD pile" ASAP :D ) |
18:34:23 | <betamax> | I have a USB 1TB HDD that started giving me I/O errors. As soon as that happened I disconnected it and put it in the "deal with it later" pile |
18:34:48 | <fireonlive> | you have some bad luck! |
18:35:23 | <betamax> | that drive was *ancient*, I'm just bad at upgrading ("oh, no, this 12-year-old drive is fine for <critical thing>") |
18:35:38 | <fireonlive> | ah :) |
18:35:49 | <betamax> | Is the best way to try and recover it to (1) connect it but not mount it, and (2) use dd / ddrescue to make an image of it?
18:36:08 | <betamax> | Or is there a better way? (excluding paying for recovery, which I don't want to do yet) |
18:38:25 | <@JAA> | Yeah, ddrescue is what I'd try, I think. |
18:38:40 | <@JAA> | Or retrieve it from the backups. ;-) |
18:43:40 | <fireonlive> | back-what? |
18:43:52 | <fireonlive> | and yeah ddrescue |
18:44:27 | <fireonlive> | i saw a post on reddit where someone had a HDD where they accidentally deleted some data.. and used recovery software... to put the deleted data back onto the same HDD they deleted it from.
18:44:30 | <fireonlive> | :| |
18:44:59 | <fireonlive> | not like 'ok i got what i can, let's move it back' but 'let's recover from and then write to the same drive'
18:45:25 | <@JAA> | Yeah, great idea. |
18:46:13 | <fireonlive> | it's what data recovery companies recommend xP |
18:47:37 | <nicolas17> | /o\ |
18:48:14 | | qwertyasdfuiopghjkl (qwertyasdfuiopghjkl) joins |
18:52:48 | | nexusxe quits [Client Quit] |
19:13:36 | | Wohlstand (Wohlstand) joins |
19:27:51 | <Terbium> | re: RuneScape forums, another forum gets swallowed by Discord |
19:30:44 | <@JAA> | No no, Discord, Reddit and Twitter!!1! |
19:59:42 | | katia is now known as _ |
20:03:51 | | _ is now known as katia |
20:26:50 | | qwertyasdfuiopghjkl quits [Remote host closed the connection] |
20:34:24 | <nulldata> | Totally sad news everyone, GameStop is ending their NFT marketplace. https://nft.gamestop.com/ https://lounge.nulldata.foo/uploads/253d2886be251e91/image.png |
20:38:32 | <h2ibot> | Nulldata edited Deathwatch (+257, /* 2024 */ Added GameStop NFT Marketplace): https://wiki.archiveteam.org/?diff=51501&oldid=51500 |
20:40:20 | <lumidify> | betamax: (re: ddrescue) make sure to use the mapfile option (optional third argument) - that lets you pause the recovery or restart it later with other options. |
20:40:51 | <betamax> | lumidify: just started it now, and yup, am doing the mapfile (it was the first thing the man page said!) |
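(For reference, a typical two-pass ddrescue run along the lines discussed here; /dev/sdX stands in for the failing drive, which should stay unmounted, and the mapfile is the third argument that makes the run resumable.)

```sh
# Pass 1: copy everything that reads cleanly, skipping the slow scraping
# of bad areas (-n); rescue.map records progress so the run can resume.
ddrescue -n /dev/sdX rescue.img rescue.map
# Pass 2: return to the bad areas with direct disc access (-d), retrying
# each failed sector up to three times (-r3).
ddrescue -d -r3 /dev/sdX rescue.img rescue.map
```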
20:40:56 | <betamax> | Continuing my broken drives / backups questions: |
20:41:02 | <betamax> | I'm (finally!) getting round to building a "proper" data storage system (rather than just a pile of HDDs on a shelf) |
20:41:09 | <betamax> | Being a cheapskate, my thought is to get 3x used 8TB HDDs off ebay (but from business sellers with a 1-year warranty) and create a TrueNAS mirrored setup
20:41:15 | <betamax> | Main questions: (1) Is there any real danger in going with used drives if I have it in a 3 drive mirror? |
20:42:16 | <betamax> | (2) Is 3-drive mirror significantly better than 2 drive (there's a couple of posts online from people saying that in a 2 drive mirror, if one drive fails the strain on copying everything to the replacement drive can cause the remaining one to fail - really?!) |
20:43:22 | <betamax> | I initially thought I'd get 2x new drives, but in my mind 3x used is better redundancy, and if it turns out I buy duds that fail quickly, the worst it's done is cause me extra expense - not put the data at risk
20:43:32 | <audrooku|m> | Betamax: I don't think the sellers' one-year warranties are binding in any way, so you're gambling that they will honor them
20:44:49 | <betamax> | fair, but I'm looking at business sellers with 99% feedback and 10k+ items sold, and only ones with 5+ years on ebay |
20:45:23 | <@JAA> | Yes, such cascading failures are a significant risk, especially if the drives you have are all of similar age or, worse, from the same batch. |
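(One standard back-of-the-envelope for why the rebuild itself is the risky moment: reading a full 8 TB drive end to end at the common consumer spec of one unrecoverable read error per 1e14 bits gives a sizable chance of hitting at least one error. Real drives often beat the spec, so treat this as illustrative only.)

```python
import math

ure_rate = 1e-14      # spec-sheet URE rate, errors per bit read (assumption)
bits_read = 8e12 * 8  # one full end-to-end read of an 8 TB drive

# Poisson approximation for the chance of at least one URE during a rebuild:
p_any_error = 1 - math.exp(-ure_rate * bits_read)
print(f"{p_any_error:.0%}")  # ~47%
```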
20:45:53 | <lumidify> | Just my 2c: I would never trust data that's only stored on two drives to be safe. Three copies are the bare minimum for important data (although two copies are sadly still better than what most people have). |
20:46:37 | <@JAA> | I do two copies for most data I'd like to keep, three copies for anything I really care about. |
20:46:43 | <betamax> | Yeah, 3 drives feels a lot safer to me than 2 |
20:47:05 | <betamax> | JAA: I assume then that I am still at risk of cascading failures if I buy 3x drives from the same seller then? |
20:47:17 | <betamax> | (given they've probably been pulled from the same system) |
20:47:21 | <@JAA> | Potentially, yeah. |
20:47:36 | <@JAA> | Personally, I mix HDD manufacturers, too. Or at least models. |
20:48:10 | <@JAA> | So, if I have the choice, one copy on WD drives and one copy on Seagate drives. |
20:48:38 | | Wohlstand quits [Client Quit] |
20:48:38 | <@JAA> | No experience with Toshiba, but I'll consider them on the next purchase. |
20:48:54 | <betamax> | The seller has a mix of manufacturers, I was planning on asking them to provide a mix if possible |
20:49:16 | <betamax> | but I guess cascading failures are still a risk if the drives came from the same NAS / CCTV setup / etc. |
20:50:17 | <betamax> | The obvious fix is to buy new, I may compromise that by buying used from two separate sellers (and having 3x drives) |
20:54:30 | | DogsRNice joins |
20:54:55 | | Wohlstand (Wohlstand) joins |
20:54:57 | | Wohlstand quits [Client Quit] |
20:55:27 | | qwertyasdfuiopghjkl (qwertyasdfuiopghjkl) joins |
20:56:20 | | inedia quits [Ping timeout: 240 seconds] |
20:57:46 | | imer quits [Killed (NickServ (GHOST command used by imer9))] |
20:57:53 | | imer (imer) joins |
21:00:47 | | Megame quits [Ping timeout: 272 seconds] |
21:00:52 | | inedia (inedia) joins |
21:03:06 | | Megame (Megame) joins |
21:03:15 | | Wohlstand (Wohlstand) joins |
21:15:30 | | sarge (sarge) joins |
21:17:16 | | lexikiq joins |
21:19:09 | | mr_sarge quits [Ping timeout: 272 seconds] |
21:51:42 | | Wohlstand quits [Client Quit] |
21:51:47 | | BlueMaxima joins |
21:52:46 | <h2ibot> | Blankie edited List of websites excluded from the Wayback Machine (+28, Add https://pendantaudio.com/): https://wiki.archiveteam.org/?diff=51502&oldid=51490 |
21:58:20 | | Ruthalas59 quits [Ping timeout: 240 seconds] |
22:00:47 | <h2ibot> | JAABot edited List of websites excluded from the Wayback Machine (+0): https://wiki.archiveteam.org/?diff=51503&oldid=51502 |
22:43:23 | | itachi1706 quits [Ping timeout: 272 seconds] |
22:52:06 | | itachi1706 (itachi1706) joins |
22:55:59 | <h2ibot> | Nulldata edited Deathwatch (+241, /* Pining for the Fjords (Dying) */ Added…): https://wiki.archiveteam.org/?diff=51504&oldid=51501 |
22:56:59 | <h2ibot> | Nulldata edited Deathwatch (-3, /* 2024 */ Correct Artifact URL): https://wiki.archiveteam.org/?diff=51505&oldid=51504 |
23:00:40 | | Ruthalas59 (Ruthalas) joins |
23:02:15 | | Ruthalas59 quits [Client Quit] |
23:02:46 | | Ruthalas59 (Ruthalas) joins |
23:04:55 | | lunik173 quits [Ping timeout: 272 seconds] |
23:09:37 | | lunik173 joins |
23:11:05 | <nulldata> | Doesn't appear to be a way into Artifact from a browser - all previous links just redirect to the notice. The app still works; however, at the moment I can't find any of the 'social' features.
23:12:04 | | c3manu quits [Remote host closed the connection] |
23:18:55 | <nulldata> | The AI summary function still works... |
23:19:27 | <nulldata> | https://lounge.nulldata.foo/uploads/5a5fb739f5ad1eeb/Screenshot_20240112-181702.png https://lounge.nulldata.foo/uploads/10ff36d7572f6e9e/Screenshot_20240112-181716.png https://lounge.nulldata.foo/uploads/1d43b8c9a73910d9/Screenshot_20240112-181740.png |
23:41:00 | <nulldata> | Can someone please throw https://mosaic.co/ into AB? See -ot; assets were bought and all employees let go
23:50:55 | | Arcorann (Arcorann) joins |
23:57:01 | | Megame quits [Client Quit] |