#archlinux-ports | Logs for 2017-06-26

[00:21:15] -!- eschwartz has quit [Remote host closed the connection]
[00:22:56] -!- eschwartz has joined #archlinux-ports
[00:46:57] -!- eschwartz has quit [Ping timeout: 240 seconds]
[00:49:20] -!- guys has quit [Ping timeout: 260 seconds]
[00:57:48] -!- guys has joined #archlinux-ports
[01:18:04] -!- eschwartz has joined #archlinux-ports
[01:19:37] -!- eschwartz has quit [Client Quit]
[01:27:59] -!- eschwartz has joined #archlinux-ports
[01:28:11] -!- eschwartz has quit [Read error: Connection reset by peer]
[01:45:45] -!- eschwartz has joined #archlinux-ports
[02:46:43] -!- eschwartz has quit [Ping timeout: 260 seconds]
[03:01:23] -!- eschwartz has joined #archlinux-ports
[05:36:33] -!- p71 has quit [Read error: Connection reset by peer]
[05:44:24] -!- isseigx has joined #archlinux-ports
[06:07:13] -!- p71 has joined #archlinux-ports
[06:21:05] -!- dmakeyev_ has joined #archlinux-ports
[06:40:26] -!- deep42thought has joined #archlinux-ports
[06:47:54] -!- isseigx has quit [Quit: Leaving]
[07:09:42] -!- deep42thought has quit [Remote host closed the connection]
[07:42:21] -!- eschwartz has quit [Remote host closed the connection]
[07:48:22] -!- eschwartz has joined #archlinux-ports
[07:50:35] -!- eschwartz has quit [Remote host closed the connection]
[07:50:56] -!- eschwartz has joined #archlinux-ports
[08:01:47] -!- eschwartz has quit [Remote host closed the connection]
[08:02:37] -!- eschwartz has joined #archlinux-ports
[08:12:06] -!- eschwartz has quit [Remote host closed the connection]
[08:13:46] -!- eschwartz has joined #archlinux-ports
[08:14:21] -!- deep42thought has joined #archlinux-ports
[08:18:24] -!- isacdaavid has quit [Quit: isacdaavid]
[08:38:16] -!- eschwartz has quit [Remote host closed the connection]
[08:39:31] -!- eschwartz has joined #archlinux-ports
[08:44:49] -!- dmakeyev_ has quit [Ping timeout: 255 seconds]
[08:51:24] -!- eschwartz has quit [Remote host closed the connection]
[08:54:33] -!- eschwartz has joined #archlinux-ports
[09:38:32] -!- eschwartz has quit [Remote host closed the connection]
[09:41:07] -!- eschwartz has joined #archlinux-ports
[09:42:31] -!- eschwartz has quit [Remote host closed the connection]
[09:42:48] -!- eschwartz has joined #archlinux-ports
[09:45:25] -!- Faalagorn has quit [Ping timeout: 240 seconds]
[10:00:48] -!- Faalagorn has joined #archlinux-ports
[10:08:52] -!- deep42thought has quit [Remote host closed the connection]
[10:12:06] -!- deep42thought has joined #archlinux-ports
[10:14:55] -!- dmakeyev has joined #archlinux-ports
[10:27:48] -!- eschwartz has quit [Remote host closed the connection]
[10:30:26] -!- eschwartz has joined #archlinux-ports
[11:01:08] -!- eschwartz has quit [Ping timeout: 260 seconds]
[11:02:55] -!- guys has quit [Ping timeout: 240 seconds]
[12:23:47] <deep42thought> tyzoid: I just uploaded a new archiso into testing/ - feel free to test (or maybe I'll have a look into vagrant now)
[12:43:37] -!- guys has joined #archlinux-ports
[13:52:15] -!- guys has quit [Quit: A random quit message]
[13:53:56] -!- guys has joined #archlinux-ports
[13:58:42] -!- eschwartz has joined #archlinux-ports
[13:59:36] -!- eschwartz[m] has quit [Changing host]
[13:59:36] -!- eschwartz[m] has joined #archlinux-ports
[14:10:12] <Polichronucci> deep42thought: the testing i686 image boots for me, I ran it in a qemu VM on an x86_64 host
[14:10:27] <deep42thought> nice :-)
[14:10:49] <deep42thought> Polichronucci: btw, I added a download page to the website
[14:11:54] <Polichronucci> ok perfect
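A minimal sketch of the smoke test Polichronucci describes, assuming qemu on an x86_64 host; the ISO filename is a placeholder for the actual image in testing/.

    # Boot the i686 test ISO in a qemu VM (KVM acceleration also works for 32-bit guests).
    ISO=archlinux32-testing-i686.iso   # placeholder; use the real file from testing/
    qemu-system-i386 -enable-kvm -m 1024 -cdrom "$ISO" -boot d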
[15:19:30] tyzoid is now known as tyzoid|afk
[15:19:34] tyzoid|afk is now known as tyzoid|away
[15:42:33] -!- dmakeyev has quit [Ping timeout: 260 seconds]
[16:26:04] -!- tyzoid has joined #archlinux-ports
[16:26:11] <tyzoid> hey deep42thought, I'm back
[16:26:19] <deep42thought> welcome back
[16:26:40] <tyzoid> looks like my ssh tunnel broke :/
[16:26:47] <tyzoid> so I don't have access to my pgp keys today
[16:26:58] <tyzoid> not like I was going to do much with 'em anyway
[16:27:58] <deep42thought> reverse ssh tunnels are somewhat unstable - I exchanged mine for a vpn
[16:28:06] <tyzoid> ehh
[16:28:12] <tyzoid> I've got a command called everssh
[16:28:23] <tyzoid> It's a script I wrote that monitors and restarts it as need be
[16:28:41] <tyzoid> since ssh can automatically reconnect if the outage is short enough, it usually keeps all connections open
[16:28:49] <tyzoid> but reboot caused that to fail
[16:28:55] <tyzoid> so... gotta add that as a cron job now
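A minimal sketch of a watchdog along the lines of the everssh script described above, assuming a reverse tunnel that publishes local port 22 as port 2222 on the remote end; the host, ports and sleep interval are illustrative, not the actual script.

    #!/bin/bash
    # Keep a reverse ssh tunnel alive: whenever the ssh process exits, start it again.
    # ExitOnForwardFailure makes ssh exit (and thus get restarted) if the remote forward is lost;
    # ServerAliveInterval lets ssh notice a dead connection instead of hanging forever.
    while true; do
        ssh -N \
            -o ExitOnForwardFailure=yes \
            -o ServerAliveInterval=30 \
            -R 2222:localhost:22 \
            tunneluser@example.org
        sleep 10   # short pause before reconnecting
    done

To survive reboots (the failure mode mentioned above), the script can be started from cron with an @reboot entry in the user's crontab.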
[16:29:19] <tyzoid> anyway, did Polichronucci want access to the site?
[16:29:48] <deep42thought> no, he doubts he'd be of much help with the forum and news site
[16:30:23] <tyzoid> okay
[16:32:02] <deep42thought> I would like to change the current dependency scheme from "do not build if make dependencies are still to be built" to "do not build if runtime dependencies are still to be built"
[16:32:51] <tyzoid> are there still loops in runtime dependencies?
[16:32:56] <deep42thought> yes
[16:33:00] <tyzoid> fewer?
[16:33:06] <deep42thought> yes
[16:33:12] <deep42thought> but I think we miss some connections
[16:33:16] <deep42thought> which should not be missed
[16:33:31] <tyzoid> I don't have an objection to it, but I'm not sure how it would affect the stability of the packages
[16:33:36] <tyzoid> We'd need to do some testing first
[16:34:17] <deep42thought> well, currently there are ~1k broken packages ...
[16:34:26] <deep42thought> I think we can't top that ...
[16:34:30] <tyzoid> I'm not talking about broken packages
[16:34:36] <tyzoid> I'm talking about unstable packages
[16:34:43] <tyzoid> i.e. ones that don't run right when installed
[16:35:02] <deep42thought> this should only get better if you only consider runtime dependencies
[16:35:24] <deep42thought> e.g. it's compiled with a (potentially) old compiler but uses new libraries, or vice versa
[16:35:35] <tyzoid> Wouldn't it be possible to break ABI compatibility but keep API?
[16:35:52] <tyzoid> That's what I'm worried about
[16:36:07] <tyzoid> Granted, it's a small problem, if it exists
[16:36:11] <tyzoid> but that could be one pitfall
[16:37:15] <tyzoid> damn, editing a presentation in LibreOffice, and my instinct is still to <esc> :w
[16:46:54] <tyzoid> deep42thought: Any thoughts?
[16:47:43] <deep42thought> I guess, I simply don't understand the topic completely
[16:49:04] <deep42thought> if you compile a library, any libraries you depend on will be runtime dependencies, too - so they build "in the right order"
[16:49:23] <deep42thought> if you compile a binary, all needed libraries and binaries are runtime dependencies, too
[16:49:38] <deep42thought> so how would you create some error which only shows up at runtime?
[16:50:44] <tyzoid> I guess you're right, then
[16:50:53] <tyzoid> It'll either be statically linked, in which case it doesn't matter
[16:51:02] <tyzoid> or dynamically linked, in which case it'll be a runtime dep
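A minimal sketch of the proposed check, holding a package back only while its runtime dependencies (depends, not makedepends) are unbuilt; it assumes a .SRCINFO generated by makepkg --printsrcinfo and a plain-text list of already-built package names, both stand-ins for the real build-list logic.

    #!/bin/bash
    # Usage: ready-to-build.sh path/to/.SRCINFO built-packages.txt
    # Prints "ready" if none of the package's runtime dependencies are missing from the
    # list of already-built packages, "blocked" otherwise.
    srcinfo=$1
    built=$2
    missing=0
    # Only "depends =" lines count; "makedepends =" and "checkdepends =" are ignored.
    while read -r dep; do
        dep=${dep%%[<>=]*}   # strip version constraints such as foo>=1.2
        if ! grep -qx "$dep" "$built"; then
            echo "missing runtime dependency: $dep"
            missing=1
        fi
    done < <(sed -n 's/^[[:space:]]*depends = //p' "$srcinfo")
    [ "$missing" -eq 0 ] && echo ready || echo blocked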
[16:54:02] <deep42thought> ok, I have to leave now
[16:54:10] <deep42thought> probably I'll be back online in ~1-2h
[16:54:33] <tyzoid> alright, sounds good
[16:54:51] -!- deep42thought has quit [Quit: Leaving.]
[17:05:10] <tyzoid> More food for thought, do we want to make a .torrent available for the .iso files like the main archlinux project does?
[17:05:16] -!- p71 has quit [Remote host closed the connection]
[17:05:54] <tyzoid> It uses webseeds, which should be pretty easy to set up, since we already have some mirrors in place
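A minimal sketch of how such a torrent could be built with mktorrent (packaged in Arch); the tracker and mirror URLs are placeholders, not the project's actual infrastructure.

    # Create a torrent for the ISO with a tracker announce URL and an HTTP webseed.
    # All URLs and filenames below are placeholders; further mirrors can be added as
    # additional web seeds.
    mktorrent \
        -a http://tracker.example.org:6969/announce \
        -w https://mirror1.example.org/archisos/archlinux32-i686.iso \
        -o archlinux32-i686.iso.torrent \
        archlinux32-i686.iso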
[17:14:45] -!- dmakeyev has joined #archlinux-ports
[18:39:16] -!- deep42thought has joined #archlinux-ports
[18:39:39] <tyzoid> wb deep42thought
[18:40:10] <deep42thought> Hi
[18:40:24] <tyzoid> I was not expecting a capital letter there :)
[18:40:28] <tyzoid> Threw me off my game.
[18:40:45] <deep42thought> sry, I'm German - We write a Lot with capital Letters ;-)
[18:41:17] <tyzoid> No problem. Anyway, I'm thinking it might be nice to set up a .torrent for the isos
[18:42:02] <deep42thought> yeah, feel free to get something running (I'm clueless, currently)
[18:42:30] <tyzoid> Sure. What's your thought of scripting a solution using the sftp tunnel you gave me?
[18:42:42] <tyzoid> Would you prefer I not automate access to that?
[18:44:41] <deep42thought> you can script it - but in my experience sftp tunnels are unreliable (like ssh tunnels)
[18:45:00] <deep42thought> btw: I'm afk for lunch for a few tens of minutes
[18:45:00] <tyzoid> My machine has rsa keys to access it without a password
[18:45:06] <tyzoid> so I can script that pretty easily
[18:45:12] <tyzoid> just launch a new connection whenever I need
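A minimal sketch of that kind of scripted, key-based transfer: a fresh connection per upload and no password prompt. The user, host and paths are made up for illustration.

    # Non-interactive upload authenticated with an RSA key instead of a password.
    scp -i ~/.ssh/id_rsa archlinux32-i686.iso.torrent mirroruser@mirror.example.org:isos/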
[18:45:34] -!- tyzoid|away has quit [Ping timeout: 255 seconds]
[18:46:18] <tyzoid> and that's a really late lunch :)
[18:49:16] <deep42thought> ah, sry I meant dinner
[18:49:23] <tyzoid> Figured
[18:49:39] <tyzoid> No worries :)
[18:59:17] -!- tyzoid|away has joined #archlinux-ports
[19:11:53] <deep42thought> I tried to make a torrent for the iso once, but failed to find a tracker for it
[19:12:22] <tyzoid> Ah, there shouldn't need to be a tracker
[19:12:27] <deep42thought> Then I looked at archlinux' torrent and they seem to run their own tracker (for the iso), but I haven't gotten deeper into that part
[19:12:35] <deep42thought> you do need a tracker
[19:12:40] <deep42thought> but you can run it yourself
[19:12:47] <tyzoid> Just a simple php script on a web server
[19:12:51] <tyzoid> not a full-blown one
[19:12:53] <rewbycraft> Can't we ask arch if we can use theirs?
[19:13:07] <tyzoid> possible
[19:13:22] <deep42thought> brtln ^
[19:21:46] <brtln> what does it take?
[19:21:53] <brtln> you need to e-mail Pierre, most likely
[19:22:04] <brtln> yes, you have my agreement, but I'm not the one who handles that
[19:23:02] <brtln> looks like we run hefurd for that
[19:23:55] <brtln> if Pierre doesn't get back to you in 3-6 days, I'll handle it
[19:24:51] <deep42thought> tyzoid: will you write that mail? Do you know what exactly we need?
[19:25:28] <tyzoid> I've looked at the protocol, not so much the software
[19:25:49] <tyzoid> We were looking into this internally at the company I work for, so we were considering custom software
[19:26:10] <tyzoid> Afaik, it's not hard to set up, but that's coming from someone who hasn't done it with existing production software
[19:26:27] <tyzoid> so in short: I don't know exactly what we need.
[19:27:24] <tyzoid> There are also existing public trackers that we could use instead too
[19:40:36] <tyzoid> hey deep42thought, if you're still on
[19:40:44] <deep42thought> I am
[19:41:00] <tyzoid> can you see if you can access the backups folder inside of the sftp server for bbs-archlinux32?
[19:41:08] <tyzoid> That's the dump of the mysql database, taken weekly
[19:41:33] <deep42thought> sha512sum: 414f4ea8aa54511a0bb04b54ef0c709c35dcb11443d342b915dc0edc37b6342136135510c9a1a15644056d7875772124f26e6790ea83394d09d60b2c15afb3b6
[19:41:37] <deep42thought> I can
[19:41:46] <tyzoid> :)
[19:41:49] <tyzoid> sweet
[19:41:59] <tyzoid> that should be the last piece to recreate the forum if necessary
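A minimal sketch of the weekly dump-and-checksum job described here, assuming it runs from cron; the database name, credentials (expected in ~/.my.cnf) and destination directory are placeholders.

    #!/bin/bash
    # Weekly forum database dump with a sha512 checksum stored next to it.
    set -euo pipefail
    stamp=$(date +%Y-%m-%d)
    out=/srv/backups/bbs-archlinux32-"$stamp".sql.gz
    mysqldump --single-transaction bbs_archlinux32 | gzip > "$out"
    sha512sum "$out" > "$out".sha512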
[19:42:17] <deep42thought> do you back this up somewhere or should I do so?
[19:42:24] <tyzoid> I was just about to say
[19:42:33] <tyzoid> we should have some sort of internal backup infrastructure set up
[19:43:20] <rewbycraft> How would you propose to do that?
[19:43:31] <tyzoid> Ideally it'd be a mount we could throw files onto, and it'd distribute them to a second machine
[19:43:35] <deep42thought> each server operator backs up whatever is on his server
[19:43:44] <rewbycraft> That seems like an idea
[19:43:52] <deep42thought> which one?
[19:43:55] <rewbycraft> Everybody's responsible for their own backups makes sense
[19:44:10] <tyzoid> deep42thought: Yeah, but the problem with everyone doing their own backups is that if that person is away, we're left without any data for that service
[19:44:19] <tyzoid> this ensures that the bus factor isn't a problem
[19:44:30] <deep42thought> tyzoid: right
[19:44:44] <tyzoid> I'm all in favor of people being responsible and taking their own backups
[19:44:45] <rewbycraft> I would suggest using some distributed filesystem
[19:44:47] <tyzoid> this is just for availability
[19:44:57] <rewbycraft> And we could just back up to the distributed fs
[19:45:06] <rewbycraft> And we'd all be equally responsible for some of the backup
[19:45:12] <tyzoid> I wonder if it's possible to create a mount to a redis cluster?
[19:45:19] <tyzoid> redis afaik has persistent storage for binary blobs
[19:45:20] <rewbycraft> I was thinking glusterfs?
[19:45:43] <tyzoid> rewbycraft: Does glusterfs support having duplicate copies?
[19:45:55] <tyzoid> or is it single-point-of-storage?
[19:46:00] <rewbycraft> It does replicated
[19:46:04] <rewbycraft> https://gluster.readthedocs.io
[19:46:05] <phrik> Title: Setting Up Volumes - Gluster Docs (at gluster.readthedocs.io)
[19:46:13] <rewbycraft> It says under "Volumes of the following types can be created"
[19:46:21] <deep42thought> glusterfs is good, as long as you don't have too many small files
[19:46:37] <tyzoid> what does gluster have against small files?
[19:46:38] <rewbycraft> Hmh. But it would fill the niche of "shared backup burden"
[19:46:53] <deep42thought> nothing, its performance just drops for many of them
[19:47:07] <deep42thought> (probably also for many large ones, but then you won't notice anyway)
[19:47:10] <tyzoid> only if large amounts are actively being created, no?
[19:47:36] <deep42thought> rewbycraft: glusterfs sounds perfect for me
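A minimal sketch of a two-node replicated gluster volume of the kind being discussed; hostnames and brick paths are placeholders, and rewbycraft's planned forum guide would be the authoritative version.

    # Run on one node once glusterd is up on both and they can reach each other.
    gluster peer probe node2.example.org
    # Create a volume that keeps a full replica of every file on each node's brick.
    gluster volume create backups replica 2 \
        node1.example.org:/data/glusterfs/backups \
        node2.example.org:/data/glusterfs/backups
    gluster volume start backups
    # Any participating host can then mount the volume and drop backup files into it.
    mount -t glusterfs node1.example.org:/backups /mnt/backups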
[19:47:55] <rewbycraft> I figure, that way we've got highly available, replicated backups
[19:48:21] <rewbycraft> If you wanted to be real crazy, you could probably store the master package mirror on that
[19:48:21] <deep42thought> a colleague has his home on glusterfs and always complains that it takes some time to e.g. open firefox due to fs lag
[19:48:37] <tyzoid> I doubt we'd need the master package mirror there
[19:48:41] <tyzoid> because it's replicated already
[19:48:44] <rewbycraft> True
[19:48:45] <deep42thought> rewbycraft: It's not that crazy, I think
[19:48:53] <rewbycraft> Package files aren't that big
[19:48:56] <rewbycraft> So it's a little crazy
[19:49:01] <tyzoid> deep42thought: If the master goes down, you've got slaves that are about half an hour behind at worst
[19:49:11] <deep42thought> well, if something breaks and everyone syncs off it, it doesn't help ...
[19:49:29] <rewbycraft> I can look into setting up gluster tonight
[19:49:31] <tyzoid> deep42thought: You've got 2 (3?) independent servers mirroring yours
[19:49:33] <rewbycraft> Gonna make dinner first though
[19:49:37] <tyzoid> rewbycraft: Sounds good
[19:49:38] <deep42thought> it protects against different kinds of failures
[19:49:42] <tyzoid> deep42thought: such as?
[19:49:56] <deep42thought> mirror failure or mirror content failure
[19:49:59] <tyzoid> The only failure mode I can see that would be a problem is if the master decides to delete everything
[19:50:09] <deep42thought> or break everything
[19:50:14] <deep42thought> as I usually do :-/
[19:50:20] <tyzoid> mirror failure would only disrupt one mirror
[19:50:53] <rewbycraft> Anyway, I'll experiment with a few vms and write up a nice guide on the forum for you guys to join the gluster
[19:51:05] <rewbycraft> And then we can just back up services to the gluster as needed
[19:51:27] <rewbycraft> Does that sound like a plan?
[19:51:27] <tyzoid> rewbycraft: Does gluster allow asymmetric storage?
[19:51:36] <rewbycraft> I believe so
[19:51:37] <tyzoid> i.e. where one server has 50G available, but another has 100?
[19:52:01] <tyzoid> That'd alleviate my problem, since my large drives are on my personal machine behind a firewall
[19:52:11] <rewbycraft> As I said, I'll experiment on a few vms
[19:52:14] <tyzoid> sounds good
[19:53:34] <deep42thought> sounds perfect
[19:53:55] <rewbycraft> Another option'd be ipfs
[19:53:59] <rewbycraft> But I'll do some throughput tests
[19:54:04] <rewbycraft> And see what works best
[19:54:06] <deep42thought> I don't know ipfs
[19:54:14] <tyzoid> There was also a solution I was working on, which I think we discussed privately, rewbycraft
[19:54:24] <rewbycraft> Yeah. But that's not ready "now"
[19:54:30] <tyzoid> but it was more of a personal project, and I haven't gotten anywhere close with it yet
[19:54:32] <tyzoid> yeah
[19:54:49] <rewbycraft> So I'm just gonna throw up a few arch vms on my cluster
[19:54:56] <rewbycraft> And see how it does
[19:55:51] <tyzoid> rewbycraft: We'd also need to check how it handles the higher latency between servers
[19:56:02] <tyzoid> since I'm over here in the US and you guys are out in the EU
[19:56:17] <rewbycraft> I know ipfs is designed to handle latency
[19:56:23] <rewbycraft> Not sure about gluster
[19:56:31] <rewbycraft> Again, I'll sit down and evaluate *after* dinner
[19:56:45] <rewbycraft> IPFS: https://ipfs.io
[19:56:57] <tyzoid> yeah
[19:57:01] <tyzoid> go enjoy your food :)
[19:57:17] <tyzoid> I'm not leaving soon :P
[20:00:32] <rewbycraft> Alternatively tahoe lafs is an option. I know a few people that use it so I can ask 'em
[20:04:44] <tyzoid> tahoe lafs seems to be more useful for single-user encrypted file storage across a network
[20:05:09] <rewbycraft> As I said, evaluating options
[20:05:17] <tyzoid> In this case, we 'trust' the other hosts (with hash verification) - and we actively want all hosts to have read/write access
[20:05:23] <rewbycraft> Hmh
[20:05:32] <rewbycraft> gluster doesn't like high latency
[20:05:39] <rewbycraft> But ipfs doesn't do backup stuff well
[20:05:52] <rewbycraft> So am looking at alternatives
[20:06:02] <tyzoid> The other thing is whether we want to set up read/write only for the origin server and read-only for the other servers
[20:06:12] <tyzoid> tahoe might work for that, but seems to have a higher setup cost
[20:06:19] <rewbycraft> As I said, investigating
[20:06:28] <tyzoid> yup, just thinking into the channel
[20:06:36] <rewbycraft> Me too
[20:06:36] <tyzoid> hoping my thoughts are useful :)
[20:10:18] -!- p71 has joined #archlinux-ports
[20:45:50] <rewbycraft> tyzoid: After some reviewing, it seems that neither gluster nor ipfs really works for our use-case. I'm currently investigating infinit.sh
[20:45:53] <rewbycraft> Opinions?
[20:47:02] <tyzoid> I think that tahoe lafs might work
[20:47:09] <tyzoid> it'd be a bit more work to set up
[20:47:13] <tyzoid> but I think it should work for us
[20:47:20] <tyzoid> not sure about the latency issues
[20:47:20] <rewbycraft> Also look at infinit.sh
[20:47:28] <rewbycraft> It seems to be designed to handle asymmetric storage
[20:47:34] <tyzoid> exactly
[20:47:40] <tyzoid> that's what turned me off it at first
[20:47:46] <tyzoid> but then I went "oh, wait"
[20:47:49] <rewbycraft> Hm?
[20:47:54] <rewbycraft> What do you mean?
[20:48:03] <tyzoid> I thought it was too heavy-handed for access control
[20:48:11] <rewbycraft> About which program are we talking
[20:48:13] <tyzoid> but then enforcing read/write permissions could be helpful
[20:48:22] <tyzoid> tahoe lafs
[20:48:31] <rewbycraft> Take a look at this: https://infinit.sh
[20:48:33] <phrik> Title: Infinit Storage Platform (at infinit.sh)
[20:49:23] <tyzoid> The thing is that this is for backups
[20:49:33] <rewbycraft> So?
[20:50:27] <tyzoid> So out of CAP, we only need the AP
[20:50:38] <rewbycraft> CAP?
[20:50:39] <tyzoid> with only minimal A
[20:50:44] <tyzoid> https://en.wikipedia.org
[20:50:46] <phrik> Title: CAP theorem - Wikipedia (at en.wikipedia.org)
[20:50:57] <rewbycraft> Ah that
[20:50:57] <tyzoid> the C is useful, but not absolutely necessary
[20:51:08] <tyzoid> it's granted by virtue of the amount of time given by syncing
[20:51:34] <tyzoid> The fact is we're not going to be using this as a high performance system
[20:51:39] <tyzoid> reads will be rare
[20:51:43] <tyzoid> writes will be common
[20:52:35] <tyzoid> so we don't necessarily need a general-purpose networked filesystem
[20:52:39] <tyzoid> though that would be nice
[20:53:28] <tyzoid> also, rewbycraft, do you have keybase yet?
[20:53:32] <rewbycraft> I do
[20:53:38] <tyzoid> is your handle rewbycraft?
[20:53:42] <rewbycraft> It is
[20:53:47] <rewbycraft> I'm consistent in my usernames
[20:54:06] <tyzoid> let me know if you got my message
[20:54:20] <rewbycraft> I did
[20:55:18] <tyzoid> Alright, just followed you on keybase
[20:55:43] <tyzoid> have you used the /keybase filesystem yet?
[20:56:07] <rewbycraft> Nop
[20:56:14] <tyzoid> It's pretty cool
[20:56:20] <tyzoid> It is a centralized service
[20:56:32] <tyzoid> but it stores all files either encrypted or signed (depending on public/private)
[20:56:38] <tyzoid> and it supports shared folders
[20:56:50] <rewbycraft> That doesn't quite help our use case though
[20:57:13] <tyzoid> right, but more operationally, it helps us securely send files to each other
[20:57:29] <tyzoid> Or I could keep making sftp accounts on my server :)
[20:57:32] <rewbycraft> Oh. I tend to just gpg encrypt the files
[20:57:39] <rewbycraft> And then send the encrypted files
[20:57:46] <tyzoid> That works too
[20:57:48] <tyzoid> but this is seamless
[20:57:49] <deep42thought> my approach, too
[20:57:56] <tyzoid> just put the file there, and it gets where it needs to go
[20:58:02] <rewbycraft> You can pull my key from keybase or the MIT keyserver
[20:58:05] <rewbycraft> So eh
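A minimal sketch of the gpg workflow just mentioned; the key ID, keyserver address and filename are placeholders.

    # Fetch the recipient's public key, encrypt the dump for them, then send the .gpg
    # file over whatever channel is handy.
    gpg --keyserver hkp://pgp.mit.edu --recv-keys 0xDEADBEEFDEADBEEF
    gpg --encrypt --recipient 0xDEADBEEFDEADBEEF bbs-archlinux32-dump.sql.gz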
[21:01:51] <rewbycraft> tyzoid: My problem with tahoe is that their FUSE stuff is awfully slow last I heard
[21:02:10] <tyzoid> fuse does have a tendency to be slow if using poor or inefficient bindings
[21:02:19] <tyzoid> esp. if they don't use much caching
[21:02:35] <rewbycraft> Apparently the problem is tahoe doesn't like linux's access pattern
[21:02:46] <rewbycraft> And it doesn't support ipv6
[21:02:54] <rewbycraft> Which is something that just annoys me
[21:03:24] <deep42thought> my storage won't be accessible via ipv6 either - my university is stuck in the 90's
[21:03:35] <rewbycraft> Boo
[21:03:36] <deep42thought> (same goes for mirror.archlinux32.org)
[21:03:37] <tyzoid> My server is ipv4 only too
[21:03:54] <rewbycraft> ... Am I the only one that does v6. Really.
[21:04:01] <deep42thought> I have ipv6 at home
[21:04:05] <tyzoid> same
[21:04:18] <deep42thought> and on my virtual server
[21:04:18] <rewbycraft> Ironically, my home ISP doesn't do ipv6
[21:04:22] <tyzoid> lol
[21:04:23] <deep42thought> lol
[21:04:23] <rewbycraft> But I tunnel some stuff in
[21:04:33] <rewbycraft> I've got 4ms to my AMS router
[21:04:36] <rewbycraft> So that works
[21:04:49] <tyzoid> hey, can someone check if 2601:40d:4300:9f70:1ad6:c7ff:fe0f:746a has port 22 open?
[21:04:57] <tyzoid> I don't have ip6 here at work
[21:05:45] <deep42thought> it's down
[21:05:51] <rewbycraft> No ping response
[21:05:51] <deep42thought> (ping)
[21:05:54] <rewbycraft> I'm checking the port anyway
[21:05:55] <tyzoid> thanks
[21:06:09] <tyzoid> if it's not responding to ping, it's probably down/been firewalled away
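A minimal sketch of the checks being done here, using ping and netcat over IPv6; which flags are available depends on the netcat variant installed (this assumes OpenBSD netcat).

    # Does the host answer ICMPv6 at all? (older iputils ships this as a separate ping6 binary)
    ping -6 -c 3 2601:40d:4300:9f70:1ad6:c7ff:fe0f:746a
    # Is anything listening on TCP port 22? (-z: scan only, -v: report, -w 5: 5 s timeout)
    nc -6 -z -v -w 5 2601:40d:4300:9f70:1ad6:c7ff:fe0f:746a 22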
[21:06:19] <rewbycraft> What's that ip?
[21:06:34] <tyzoid> It's the ip my home machine is connecting to IRC via
[21:06:39] <rewbycraft> Ah
[21:06:57] <deep42thought> you're still online via this ip
[21:06:59] <tyzoid> if you check the logs, tyzoid|afk connected about 3 hours ago on it
[21:07:13] <tyzoid> deep42thought: Yeah, that's why I think the firewall is blocking it
[21:08:23] <deep42thought> How much backup storage are we talking about, anyway?
[21:08:31] <rewbycraft> "enough"
[21:08:39] <deep42thought> 1EB?
[21:08:50] <tyzoid> My company doesn't even have that much
[21:10:23] <tyzoid> We're barely at 100PB
[21:10:59] <deep42thought> no, honestly, will it be overkill if I add a 4TB usb disk to any box on the internet?
[21:12:07] <rewbycraft> Um. Probably very
[21:12:08] <tyzoid> Not sure what that'd give us
[21:12:15] <tyzoid> and usb disks are notoriously slow
[21:12:24] <tyzoid> I used to do that with a 3T disk
[21:12:24] <deep42thought> I know, but they're flexible
[21:12:34] <tyzoid> and decided to get a 4T internal one instead
[21:14:53] <rewbycraft> One moment please. I'm hyping out about something
[21:27:13] <rewbycraft> I've got some idiots to deal with...
[21:29:01] -!- deep42thought has quit [Ping timeout: 246 seconds]
[21:29:39] <rewbycraft> I might not be able to get this looked into today. Some people did a massive stupid and I have to fix it
[21:32:38] -!- deep42thought has joined #archlinux-ports
[21:34:34] <tyzoid> alright, have fun then
[22:28:22] -!- isacdaavid has joined #archlinux-ports
[22:46:35] -!- eschwartz has quit [Remote host closed the connection]
[22:46:43] -!- dmakeyev has quit [Ping timeout: 246 seconds]
[22:47:42] -!- eschwartz has joined #archlinux-ports
[22:48:02] -!- eschwartz has quit [Read error: Connection reset by peer]
[22:48:48] -!- eschwartz has joined #archlinux-ports
[22:49:04] -!- eschwartz has quit [Read error: Connection reset by peer]
[22:54:52] -!- eschwartz has joined #archlinux-ports
[22:55:37] -!- eschwartz has quit [Read error: Connection reset by peer]
[22:59:33] <tyzoid> I'll check the log if there's anything new
[22:59:37] <tyzoid> but I'm heading out for now
[22:59:52] <tyzoid> I assume most everyone's already asleep.
[23:00:01] <deep42thought> I'm not
[23:00:02] <deep42thought> ;-)
[23:00:05] <tyzoid> lol
[23:00:20] <tyzoid> deep42thought: Anything you want me to look at tonight?
[23:00:25] -!- eschwartz has joined #archlinux-ports
[23:00:29] <deep42thought> nah
[23:00:34] <tyzoid> I'm planning on looking at testing infrastructure if not
[23:00:41] <tyzoid> alright, sounds good
[23:00:43] <deep42thought> todos are automatic testing and torrent for the iso
[23:00:48] <deep42thought> but no pressure
[23:00:55] <deep42thought> :-)
[23:01:29] <tyzoid> Alright. Catch ya later
[23:01:38] <tyzoid> Maybe even in the morning for you, if I'm up late
[23:01:54] -!- tyzoid has quit [Quit: WeeChat 1.8]
[23:50:57] -!- deep42thought has quit [Remote host closed the connection]
[23:57:50] -!- kerberizer has joined #archlinux-ports