#archlinux-ports | Logs for 2025-09-01

Back
[01:27:45] <bschnei> I think you are looking for this URI: https://blog.alexanderkoch.net
[01:27:45] <phrik> Title: Running Arch Linux ARM on Hetzner Cloud (at blog.alexanderkoch.net)
[01:28:37] <bschnei> That will allow you to retrace the steps I've taken, but be aware you'll have to hack around a couple of issues to bring it up to date. The boot partition is too small to simply upgrade things in place :/
[02:24:52] -!- marmis1 has quit [Quit: Bye!]
[02:26:16] -!- marmis1 has joined #archlinux-ports
[04:21:15] <bschnei> Another approach is trying to use tpowa's Archboot project. It "should" work because it uses the generic ALARM kernel.
[04:24:05] <bschnei> I think the default should work, but be mindful you may need to adapt kernel parameters. I spent a lot of time in the rescue console figuring out why my early arm64 kernel configs wouldn't boot.
[05:21:42] -!- drathir_tor has quit [Ping timeout: 272 seconds]
[05:22:58] -!- drathir_tor has joined #archlinux-ports
[11:16:31] -!- nl6720 has joined #archlinux-ports
[16:49:17] <gromit> Alrighty, ARM VM up and running \o/
[17:22:38] <gromit> Man it stresses me that htop is not yet available :D
[17:25:21] <anthraxx|M> Who still uses htop anyway? πŸ˜‚
[17:25:49] * Antiz hides
[17:28:04] <anthraxx|M> https://i.imgflip.com
[17:36:25] <gromit> anthraxx|M: I could live with anything ;)
[17:37:11] <Antiz> I can live with fastfetch 😎
[17:37:27] <Antiz> 😝
[17:41:08] <Solskogen> htop is available!
[17:42:08] <Solskogen> you just need the correct repos
[17:48:57] <jelle> is there a list of missing packages?
[18:13:17] <Solskogen> Yes-ish. Not publicly available, but I can fix that.
[18:17:14] <jelle> no rush, just curious
[18:18:28] <Solskogen> there are 442 (plus haskell packages) missing.
[18:19:07] <jelle> ooo not that many!
[18:19:08] <Solskogen> that includes things that don't make sense on aarch64 (or any architecture other than x86_64), so take that number with a grain of salt
[18:19:51] <Solskogen> it also includes packages that don't build on x86_64 either.
[18:20:58] <gromit> Solskogen: what is the correct repo? We may need some documentation ;)
[18:21:02] <Solskogen> keep in mind that we don't build everything. We use the -any packages from x86_64.
[18:21:28] <Solskogen> Server = https://arch-linux-repo.drzee.net
[18:21:45] <Solskogen> the repos are called [release] and [any-testing]
[18:22:23] <Solskogen> the latter will be renamed [any]
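The repo details above fit into a short pacman.conf snippet. This is a sketch, not confirmed configuration: the full Server path is an assumption pieced together from the /arch/any/os/aarch64/ layout DrZee describes later in the channel, and [any-testing] is slated to become [any].

```ini
# /etc/pacman.conf sketch — the Server path is an assumption based on the
# /arch/<repo>/os/<arch> layout mentioned later in this log
[release]
Server = https://arch-linux-repo.drzee.net/arch/$repo/os/$arch

[any-testing]
Server = https://arch-linux-repo.drzee.net/arch/$repo/os/$arch
```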
[18:23:18] <gromit> Solskogen: aha! I was using the repo from bschnei
[18:25:02] <Solskogen> https://antarctica.no
[18:25:15] <jelle> smoll :D
[18:25:21] <jelle> first make no sense :-)
[18:25:52] <Solskogen> well, we can add them to the provides= array
[18:26:39] <jelle> provides?
[18:27:05] <Solskogen> in the pkgbuild for gcc,glibc and binutils.
[18:27:14] <jelle> no, that does not make sense
[18:28:07] <Solskogen> it's only used for rust anyways, so no biggie
[18:28:38] <jelle> we need a way to exclude things based on arch
[18:29:05] <Solskogen> isn't that way arch= is for? :-)
[18:29:12] <Solskogen> s/way/what
[18:29:13] <jelle> yea
[18:30:31] <Solskogen> where should I put up a request for supporting options_$arch= ?
[18:30:53] <Solskogen> there are some packages that doesn't like lto on aarch64
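In PKGBUILD terms the request could look like the fragment below. To be clear, `options_aarch64=` is the *proposed* syntax from this discussion; makepkg does not support per-architecture options as of this log, so today the only choice is disabling LTO for every architecture at once.

```bash
# Hypothetical PKGBUILD fragment — options_aarch64= is the proposed
# syntax being discussed here, NOT something makepkg currently supports.
arch=(x86_64 aarch64)
options=(lto)            # keep LTO where it works
options_aarch64=(!lto)   # proposed per-arch override for packages that
                         # fail to link with LTO on aarch64
```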
[18:32:21] <jelle> is that not liking a bug in gcc?
[18:33:04] <jelle> *linking
[18:33:54] <Solskogen> That might be.
[18:34:23] <Solskogen> we're talking about only a handful of packages. some of them are also quite old.
[18:55:35] <gromit> Solskogen: for example?
[19:03:49] <anthraxx|M> <Solskogen> "where should I put up a request..." <- Sounds like a useful thing
[19:05:09] <gromit> also we really need to get a working version of devtools for this, no way that I'm using makechrootpkg :p
[19:05:19] <gromit> manually at least
[19:06:09] <anthraxx|M> gromit: I'm lazy. Give me arm IP to ssh into and I'll deliver 😸
[19:06:28] <jelle> lol
[19:06:50] <jelle> opt_$arch feels very specific
[19:07:45] <anthraxx|M> jelle: Yeah but I do see there will be cases and Arch specific problems like lto, there needs to be a way to switch per Arch, even if it's a bug in GCC it shouldnt stall a port ☺️
[19:08:10] <jelle> anthraxx|M: mwoah, we are early days here
[19:08:48] <anthraxx|M> Yep but imo whatever pacman/makepkg feature we see we may need it is best trying to poke on it very early. RTT is very high here πŸ₯²
[19:09:48] <gromit> anthraxx|M: we can manage that 😎
[19:10:55] <jelle> anthraxx|M: it can be a simple -j1
[19:11:53] <gromit> anthraxx|M: root@91.99.144.25
[19:12:36] <DrZee> we now have automatic PKG signing enabled (or at least the option). It works and generates .sig files nicely when we upload a new PKG after building. Right now I just use a randomly generated PGP key and have not pushed the public part to a PGP keyserver. New to PGP ... how can we make a more trusted key and what would that require? I can put the public key up for download from the repo if you
[19:12:36] <DrZee> want to test .... release is not yet enabled for .sig files
[19:13:32] <gromit> DrZee: it would be good if the key is somehow bound to a hardware security thing like a tpm so it can't be extracted
[19:13:45] <gromit> DrZee: but that's not a hard requirement for a poc thing I'd say
[19:13:54] <gromit> DrZee: did you also sign all existing packages?
[19:19:28] -!- wCPO7 has joined #archlinux-ports
[19:19:30] -!- DrZee_ has joined #archlinux-ports
[19:20:05] <jelle> Solskogen: anyway, makepkg issues are reported here https://gitlab.archlinux.org
[19:20:06] <phrik> Title: Issues Β· Pacman / Pacman Β· GitLab (at gitlab.archlinux.org)
[19:20:59] -!- DrZee has quit [Ping timeout: 248 seconds]
[19:21:02] DrZee_ is now known as DrZee
[19:21:07] <DrZee> that is technically possible, but running a CloudHSM instance, which is the solution to this, is costly to the tune of 800 USD/month .... i can use a different mechanism where the private key is not exposed using the AWS KMS service .... but right now I just went for a simple implementation of having the private key in a "file" (it technically is not; we have a different mechanism in the AWS
[19:21:08] <DrZee> ecosystem but as a mental model it comes close). I can encrypt it with another key though, where it's an API call to get the decrypted PGP private key programmatically and the key used to encrypt it is never exposed. It's a bit hard to explain the options I have without spending hours on teaching AWS .... but there are many and they are all safe .... but right now it's a BASIC proof of
[19:21:08] <DrZee> concept.
[19:21:25] -!- wCPO has quit [Read error: Connection reset by peer]
[19:21:25] wCPO7 is now known as wCPO
[19:23:49] <DrZee> gromit: I can enable it on a per repo fashion ... right now it's only enabled for staging-solskogen repo not release.... for any-testing we should probably just copy the .sig files over from arch repo or i can enable signing there too ....
[19:24:35] <DrZee> when i enable it I have a way to "flip" the repo to get .Sig generated
[19:24:38] <jelle> HSM doesn't prevent unwanted packages getting signed :)
[19:25:19] <gromit> DrZee: That is why I referred to the tpm for the VM, I think you could use something like https://docs.aws.amazon.com and https://gnupg.org
[19:25:20] <phrik> Title: Enable or stop using NitroTPM on an Amazon EC2 instance - Amazon Elastic Compute Cloud (at docs.aws.amazon.com)
[19:26:23] <DrZee> jelle: that is a different problem to solve 😊 right now we trust who uploads PKGs and the way to do the upload is tightly controlled .... so not so big a problem
[19:26:31] <gromit> DrZee: again I think for now having some random signature on the package is enough to get us through the poc stage
[19:27:27] <DrZee> gromit: the repo i built is not using a single physical server to do anything .... it's 100% serverless, 100% event driven .... so i don't have an instance whose tpm i can use
[19:28:08] <gromit> DrZee: oh wow, you're going hardcore AWS x)
[19:33:48] <DrZee> gromit: as mentioned i work there and find this an interesting challenge to solve 😊 .... not even the .db files are generated the traditional way. I basically generate them on request, so i don't have to worry that if 15 PKGs were uploaded at the same time some don't appear in the .db because only a single thread at a time can modify the .db file .... i process in
[19:33:48] <DrZee> parallel. I even mirror arch's extra repo for performance testing πŸ˜€ of this process ... and there I have to stay under the 10 second timeout default in pacman .... which I manage, at least for the .db; the .files take a bit longer to process ... about 30 seconds for extra
[19:35:20] <DrZee> I then cache them and detect if changes happened since the last cached db ... if no change happened i just serve the cached file, which is instant ...
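The generate-on-request-with-cache idea DrZee describes can be sketched in a few lines of Python. This is my reconstruction, not his actual code: in his setup the package metadata lives in DynamoDB and the cached db in S3; here plain dicts stand in for both, and the "db" is just a toy byte string.

```python
# Sketch of "generate the .db on request, serve a cached copy when
# nothing changed" (reconstruction, not DrZee's real Lambda code).

def serve_db(pkg_versions: dict, cache: dict) -> tuple:
    """Return (db_bytes, was_regenerated) for the current package set."""
    state = tuple(sorted(pkg_versions.items()))
    if cache.get("state") == state:
        return cache["db"], False  # nothing changed: instant cached reply
    # state changed: rebuild the db from the current package metadata
    db = "\n".join(f"{name} {ver}" for name, ver in state).encode()
    cache["state"], cache["db"] = state, db
    return db, True
```

Because the regenerate/serve decision is a pure state comparison, many parallel uploads only ever cost one rebuild per observed state change.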
[19:38:57] <gromit> DrZee: so how do we get a directory listing for the repo? :D
[19:39:18] <DrZee> I'm open to show and share what I built ... but it requires a bit of AWS knowledge to fully appreciate. it's all done in python with what is called AWS Lambda, with files stored on S3 and DynamoDB (nosql) for metadata .... CloudFront (AWS CDN) is also involved although CDN caching is disabled.
[19:41:03] <DrZee> gromit: similar as the .db files i generate essentially an index.html when i generate the .db files and just have that served up as a listing when you access the repo without going for a specific file
[19:42:51] <gromit> DrZee: I already expected something like the setup you're describing, I think I'd be more interested in the builder infrastructure
[19:43:17] <gromit> DrZee: It's confusing that when visiting e.g. https://arch-linux-repo.drzee.net there is an error :)
[19:43:40] <DrZee> CDN caching should be enabled for .zst PKG files and .Sig files ... but remain disabled for .db files ... then accessing the repo from around the world will be fast .... I mean on average I get to 60-80Mb/s when downloading from the repo in Europe where it is
[19:46:16] <DrZee> gromit: that is because it technically does not exist .... S3 is an object store and everything before the actual filename is just a key (a partition key actually) .... so to make os/ "browsable" I would have to put an index.html there essentially with one link in it only pointing to the next index.html in the key .... I just feel it's not necessary
[19:47:04] <DrZee> nice to have are miles down on my list πŸ˜€
[19:52:42] <DrZee> I have not (yet) worked on the PKG builder infrastructure ... and there are a number of things I need to understand... like when do I need to rebuild what, in what order ..... but I was thinking to implement that too, as serverless as possible .... although somewhere in there I need an instance to do the actual build .... Instances in my world are short lived: I provision one (or more -
[19:52:42] <DrZee> depends on how many parallel jobs I have) when I need one and throw it away (delete) when done .... if you have a basic machine image, bootstrapping an instance in AWS is ridiculously easy ... that's what I do when building the x86_64 AMIs that I provide to the community (see arch wiki)
[19:58:34] <gromit> DrZee: ah alright, this means that so far all building is done manually?
[19:59:29] <DrZee> that is for Solskogen to answer; he is the current "build master", but I think he has some scripts to help
[20:00:36] <DrZee> but at least he doesn't have to worry about adding/removing from the repo .db and signing .... πŸ˜€
[20:00:58] <DrZee> that's automated
[20:01:27] <gromit> DrZee: so how is a package currently released? New file in an s3 bucket?
[20:03:20] <DrZee> pretty much ... as soon as he copies it in, it kicks off an event (new file created - this is also true if the same file is overwritten) .... this kicks off an intake AWS lambda function that does its magic πŸ˜€
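The event-driven intake above can be sketched as a minimal Lambda-style handler. This is an illustrative reconstruction, not DrZee's actual function: it only shows the shape of an S3 `ObjectCreated` notification and a filter for the file types that should trigger repo regeneration; everything downstream (db rebuild, signing) is omitted.

```python
# Sketch of an S3-triggered intake handler (illustrative, not DrZee's
# real code). S3 delivers ObjectCreated notifications as an event dict
# with a "Records" list; we pick out package payloads and signatures.

def intake_handler(event: dict, context=None) -> list:
    """Return the repo-relevant object keys from an S3 event notification."""
    relevant = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        # only packages and their signatures should kick off repo updates
        if key.endswith((".pkg.tar.zst", ".sig")):
            relevant.append(key)
    return relevant
```

In a real deployment this function would be registered as the bucket's event notification target, so simply copying a package into the bucket is the whole "release" step.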
[20:04:07] <DrZee> happy to jump on a VC one day and show it ...
[20:06:31] <gromit> DrZee: The setup is fancy, but I'm not too interested in replicating it for anything :D
[20:07:41] <gromit> We have this Simplicity thing in Arch Linux :D https://wiki.archlinux.org
[20:07:42] <phrik> Title: Arch Linux - ArchWiki (at wiki.archlinux.org)
[20:18:25] <DrZee> gromit: it is simple actually. it just feels complicated because it's a new way of doing it. I run the entire repo and the AMI build process fully automated and don't have to worry about a thing (like running out of disk space etc.) it just works .... on the AMI build setup i spend <1h/month maintaining it. All changes are made in code, pushed to a git and then deployed in minutes if
[20:18:25] <DrZee> needed. in my experience, by not using the tools that exist in an ecosystem (to keep it simple or "how you would do it on your Linux at home", or because you want to be able to move between cloud vendors), you create something that is less efficient and requires more work to maintain and design .... i see that daily with the businesses I support, and they struggle almost as badly as an OSS project to
[20:18:25] <DrZee> find enough talent (despite paying handsomely) to maintain their systems and processes
[20:25:28] <DrZee> cost is another one. Making the AMIs costs me about 6 USD/month ... And most of that is the cost of storing the images in the different global regions AWS has .... the compute cost (servers/VMs) is less than 1-2 USD/month .... because I don't keep servers running when I don't need them.
[20:27:52] <DrZee> with the repo hosting it's 99.99% storage cost; the compute resources to do the ingestion are so minuscule .... of course one day, should the repo be popular, cost would shift to data transfer ... but processing would still be next to nothing
[20:38:33] <Solskogen> gromit: lcdproc
[20:41:14] <Solskogen> The script I have just takes a list of packages and build them in order. Call it a poor-mans-devtools :-)
[20:43:44] <gromit> Solskogen: yes but we need to improve the tooling away from hacked stuff towards standard things to advance the overall project
[20:44:17] <Solskogen> 100% agree
[20:45:02] <Solskogen> I haven't used pkgctl that much - does it clone the package repo as well?
[20:45:18] <gromit> Solskogen: yes there is pkgctl repo clone
[20:56:15] <Solskogen> ok, making pkgctl work for aarch64 requires some changes to pkgctl it seems
[20:58:28] <anthraxx|M> Solskogen: hardly possible to be any more vague πŸ˜… error logs or it didn't happen 🀭
[21:00:03] <Solskogen> error: failed to synchronize all databases (no servers configured for repository)
[21:01:27] <anthraxx|M> Solskogen: so you try to build. yeah i already hacked on exactly that a couple of days ago. bit sad as there is an issue report from felix and I also pointed out the right course for an MR but they never followed up 😿 so did it myself now.
[21:12:20] <gromit> anthraxx|M: got a link? 😏
[21:13:36] <anthraxx|M> not yet, got side tracked by devops duties, but would be neat to have a way of dev testing πŸ˜‰
[21:15:13] <gromit> anthraxx|M: > anthraxx|M: {root,alarm}@91.99.144.25
[21:16:02] <bschnei> gromit: sorry I should have clarified. The packages I build are ARMv8 compatible. All cloud servers are v8.2 so you definitely want to use the packages Solskogen builds on a VM
[21:16:05] <gromit> anthraxx|M: I'm also happy to test things if you post the MR somewhere
[21:23:09] <Antiz> gromit: May I ask for access too? I'd eventually be interested hacking on this... πŸ‘‰πŸ‘ˆ
[21:23:48] <Antiz> I have my rasp4 though, but it's ARMV8 (which isn't the final target AFAIU).
[21:25:20] <Antiz> If not, that's fine though! I can look at setting my own cloud VM alternatively ;)
[21:26:07] <gromit> Antiz: sure, added you too
[21:26:57] <Antiz> gromit: I'm in, thanks ❀️
[21:27:07] <gromit> DrZee: error: failed retrieving file 'any-testing.db' from arch-linux-repo.drzee.net : The requested URL returned error: 404 <-- is the repo currently being renamed?
[21:28:47] <DrZee> gromit: we are migrating to any .... so any-testing is offline now .... any is about halfway through copying the files in ...
[21:29:10] <gromit> DrZee: πŸ‘ halfway through sounds good enough to me ;)
[21:29:10] <DrZee> you can use it already but some PKGs are still copying over
[21:29:29] <gromit> DrZee: Where are you mirroring from?
[21:30:07] <Antiz> Migrated the box to [any] ;)
[21:30:30] <DrZee> right now from "myself" solskogen has a script that he uses to sync but he is zzz now πŸ™‚
[21:31:06] <gromit> DrZee: what is "myself"? Are you also hosting a mirror?
[21:31:20] <DrZee> I tried to use the RSS feed as a sync trigger .... but it's not working ideally
[21:31:39] <DrZee> I just copy over from any-testing
[21:33:09] <gromit> DrZee: is the release db file also recreated for new packages added to any?
[21:33:35] <DrZee> no any and release are two separate repos
[21:33:38] <gromit> DrZee: The rss feed is not necessarily a good interface for that, an actual sync database or the state repo would make more sense
[21:34:20] <Antiz> Getting 403 from the repo now, is it expected?
[21:35:34] <gromit> Antiz: what are you looking at?
[21:36:15] <Antiz> Browsing https://arch-linux-repo.drzee.net or trying to install a package gives 403
[21:36:17] <DrZee> I was trying to use the RSS feed to look for updated packages in Arch ... and it works ... but I have to poll it too often; if a lot of updates happen it only gives me the last 50 .... and if I poll every 5 min and you have more than 50 in that time i lose some ...
[21:36:32] <Antiz> e.g. error: failed retrieving file 'yyjson-0.12.0-1-aarch64.pkg.tar.zst.sig' from arch-linux-repo.drzee.net : The requested URL returned error: 403
[21:37:52] <DrZee> Antiz: you can't browse just /arch .... you have to go all the way to the repo itself with /arch/any/os/aarch64/
[21:38:05] <gromit> DrZee: did you see my message above? There are other interfaces (db file, state repo) that you should prefer
[21:38:33] <Antiz> DrZee: https://arch-linux-repo.drzee.net <-- 403 too
[21:39:46] <DrZee> remember the / at the end ...
[21:39:57] <Antiz> Ah oopsie, thanks :D
[21:43:03] <DrZee> that's any-testing migrated: 5021 PKGs... 23GB
[21:50:36] <Antiz> DrZee: Ah pacman complains about the missing .sig files in the repo. Didn't you say you have that now?
[21:51:13] <DrZee> so we now have release (no Sig files yet), any (where we will sync in the sig files from x86 on the next sync TMRW) and staging-solskogen (where we already generate .sig files with a "dummy" key. The public key for testing can be found at https://arch-linux-repo.drzee.net
[21:52:20] <Antiz> DrZee: Alright, thanks!
[21:52:30] <DrZee> Antiz: no, not all sig files are synced; solskogen only synced the PKGs but never the .sig ... he just changed his code tonight to also sync .sig ... but had to stop that as i migrated any-testing to any
[21:53:06] <DrZee> he will sync again tmrw... then the any repo should have all sig files
[21:53:47] <Antiz> DrZee: Alright, no problem. Just wanted to know if it was expected or not. Thanks for the info and the work πŸ€—
[22:10:17] -!- titus_livius has joined #archlinux-ports