03:28:27 TastyWiener95 (TastyWiener95) joins
06:21:04 nothere quits [Quit: Leaving]
06:49:00 nothere joins
06:49:01 nothere quits [Max SendQ exceeded]
06:50:31 nothere joins
07:26:33 TastyWiener95 quits [Client Quit]
07:27:07 TastyWiener95 (TastyWiener95) joins
07:55:36 <@rewby|backup> The IA is over 100 PiB iirc. Assuming 1M people you still need each to seed 200G. I consider 1M people to be unreasonable. Even our best and most popular projects have <5k workers. So let's assume you magically outperform them and get 10k users. Now everyone has to store 12T. Consider that you want to store everything twice because drives will fail, people will lose interest and unplug their rig. They will corrupt stuff.
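[editor's note: a minimal sketch of the back-of-envelope math above, assuming an archive of exactly 100 PiB and a 2x replication factor; the log's own 200G and 12T figures vary slightly with the exact archive size and whether replication is counted.]

```python
# Per-user storage share for a distributed IA backup.
# ARCHIVE_PIB and REPLICATION are assumptions taken from the log
# ("over 100 PiB", "store everything twice").
ARCHIVE_PIB = 100
REPLICATION = 2

def share_per_user(users: int, archive_pib: float = ARCHIVE_PIB,
                   replication: int = REPLICATION) -> float:
    """TiB each user must pledge so the archive is held `replication` times."""
    total_tib = archive_pib * 1024 * replication
    return total_tib / users

print(f"1M users:  {share_per_user(1_000_000):.2f} TiB each")  # ~0.20 TiB (~200 GiB)
print(f"10k users: {share_per_user(10_000):.1f} TiB each")     # ~20 TiB
```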
07:56:52 <@rewby|backup> I'm not saying it's technically impossible
07:57:05 <@rewby|backup> I just don't think we can hit the scale required
07:59:10 <@rewby|backup> I find it much more likely that we could get a govt grant or something and: Get 90 bay server chassis. Shove them full of 22 TB drives. After redundancy that still gives you 1.5PiB per machine. That means you need like 70 of them. That's about 7 DC racks. Make it 8 so you can have somewhere for your network and management kit.
07:59:25 <@rewby|backup> That economy of scale will really work a lot better I think
08:21:18 <@OrIdow6> If millions of people are donating a slice of their drives to AT, why not donate a slice of their time to cleaning trash off the roads? Or a bit of their money to fixing cracks in sidewalks? "There are billions of people on the Earth, if only a fraction could give a bit of their resources..." only works if there is only one place they can give them to
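[editor's note: a rough check of the datacenter sizing above. The 90-bay chassis, 22 TB drives, 1.5 PiB usable per machine, and ~100 PiB target are from the log; chassis-per-rack is an assumption (dense storage chassis, ~10 per rack).]

```python
import math

BAYS = 90
DRIVE_TB = 22
USABLE_PIB_PER_CHASSIS = 1.5   # "after redundancy" figure from the log
ARCHIVE_PIB = 100              # "over 100 PiB"
CHASSIS_PER_RACK = 10          # assumption

raw_tb_per_chassis = BAYS * DRIVE_TB                         # 1980 TB raw
machines = math.ceil(ARCHIVE_PIB / USABLE_PIB_PER_CHASSIS)   # 67, i.e. "like 70"
storage_racks = math.ceil(machines / CHASSIS_PER_RACK)       # 7
total_racks = storage_racks + 1  # +1 rack for network and management kit

print(f"raw per chassis: {raw_tb_per_chassis} TB")
print(f"machines needed: {machines}")
print(f"racks: {storage_racks} storage + 1 = {total_racks}")
```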
09:16:06 TastyWiener95 quits [Client Quit]
09:16:36 TastyWiener95 (TastyWiener95) joins
09:59:03 driib quits [Client Quit]
09:59:33 driib (driib) joins
11:49:08 nulldata quits [Quit: The Lounge - https://thelounge.chat]
11:49:47 nulldata (nulldata) joins
12:50:58 gfhh2 quits [Read error: Connection reset by peer]
13:04:25 gfhh joins
13:39:02 <@kaz> I was around *before* JAA and I do remember this project(!!)
13:39:15 <@kaz> This failed for a number of reasons, mostly that it was needlessly complicated
13:40:24 <@kaz> Factoring data health into the fact that nodes can go offline for extended periods of time is hard
13:41:21 <@kaz> and then someone fucked up shard 13
15:55:00 BearFortress quits [Client Quit]
16:14:54 Mateon1 quits [Client Quit]
16:15:05 Mateon1 joins