#archlinux32 | Logs for 2023-07-06

[00:57:14] -!- lithiumpt has quit [Ping timeout: 246 seconds]
[03:08:59] -!- ConSiGno has quit [Ping timeout: 246 seconds]
[05:50:27] -!- titus_livius has joined #archlinux32
[06:32:18] -!- drathir_tor has quit [Ping timeout: 240 seconds]
[06:52:36] -!- abaumann has joined #archlinux32
[06:52:37] <buildmaster> Hi abaumann!
[06:52:37] <buildmaster> !rq abaumann
[06:52:38] <phrik> buildmaster: <abaumann> storing old motherboards: pro-tip, always remove all batteries _before_ storing them
[06:53:00] <abaumann> so, one get-package-updates succeeded and we have no _more_ packages to build. :-)
[06:53:23] <abaumann> I'm fighting with flocks, somehow things are constantly locked and nothing moves forward.
[06:54:15] <abaumann> 0 S master 1549375 1549225 0 80 0 - 1623 do_wai Jul05 ? 00:00:33 /bin/sh /home/master/builder/bin/return-assignment perl-sys-virt 5a63aaa286ff437730636f2f62d25f2b46f22f24 0000000000000000000000000000000000000000 extra i486 1
[06:55:02] <abaumann> spawns every second
[06:56:40] <abaumann> there are many sleeps which are not randomized
[06:57:32] <abaumann> I would actually put wait_some_time everywhere
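A minimal sketch of what such a randomized wait helper could look like; the function body, the defaults and the bash-only RANDOM are assumptions, not the actual builder code:

    wait_some_time() {
        # sleep between min and min+spread seconds so that retrying
        # processes do not hammer the same lock in lock-step
        local min="${1:-5}" spread="${2:-30}"
        sleep "$(( min + RANDOM % spread ))"
    }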
[07:01:33] <abaumann> aha, all build slaves are in state uploading and are waiting for something..
[07:07:38] <abaumann> this looks like a deadlock to me
[07:13:25] <abaumann> a logical one, there is a dependency cycle in the script logic
[07:14:58] <abaumann> I now killed the return-assignment for perl-sys-virt, it was from a virtual machine which was shut down..
[07:19:34] <abaumann> and stuck again..
[07:24:18] <abaumann> "There is something intented for 90094 minutes."
[07:24:23] <abaumann> uh, this can't be good
[07:24:31] <abaumann> intention.10 intention.6 intention.7 intention.8 intention.9
[07:24:38] <abaumann> yep. tons of intentions lying around
[07:25:33] <abaumann> all of them are those return-assignments which are stuck.
[07:25:42] <abaumann> so I delete them and let's see if things recover.
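A hedged sketch of that clean-up, assuming the intention.* markers live directly under the buildmaster's work directory and anything older than a couple of hours counts as stale:

    # remove intention markers that have been sitting around far too long
    find "${work_dir}" -maxdepth 1 -name 'intention.*' -mmin +120 -print -delete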
[07:27:16] <abaumann> I also think the packages forced to rechenknecht just got stuck; I'll remove that, let's build on the whole cluster
[07:29:15] <abaumann> aha, full power back on the build slaves :-)
[07:29:24] -!- lithiumpt has joined #archlinux32
[07:33:29] <abaumann> I think, in order to reenable 'build-support' I have to ignore the upstream-package cache mechanism in this case
[07:34:36] <abaumann> it's fair to assume that build-support packages are new or customized full PKGBUILDs
[07:34:41] <abaumann> and not diff-PKGBUILDs
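In other words, something along these lines in the part of the scripts that decides where the PKGBUILD comes from; the variable names are assumptions, not the real code:

    if [ "${repository}" = 'build-support' ]; then
        # build-support carries complete PKGBUILDs, so do not try to fetch
        # and patch an upstream package description from the cache
        use_upstream_package_cache='false'
    fi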
[08:05:52] <abaumann> [abaumann@eurox bin]$ ./get-source-info python-bootstrap build-support 0000000000000000000000000000000000000000 965a0f498e125db1e5f05c928ababbce4944a867
[08:05:55] <abaumann> fatal: not a tree object: 965a0f498e125db1e5f05c928ababbce4944a867
[08:05:58] <abaumann> fatal: not a tree object: 965a0f498e125db1e5f05c928ababbce4944a867
[08:06:00] <abaumann> tar: This does not look like a tar archive
[08:06:04] <abaumann> though 965a0f498e125db1e5f05c928ababbce4944a867 is the last commit in packages32?
[08:09:48] <abaumann> some scripts are running with stale git repos?
[08:10:22] <abaumann> yep, work/repos/packages32 is not updated..
[08:15:54] <abaumann> I would have expected get-package-updates to update the state upstream repo and the packages32 repo..
[08:17:23] <abaumann> pull=true by default, ok
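A quick way to check for exactly this kind of staleness, assuming the mirror sits at work/repos/packages32 as mentioned above:

    repo="${work_dir}/repos/packages32"
    git -C "${repo}" fetch --all --prune
    # "not a tree object" usually just means the commit is not in the local mirror yet
    git -C "${repo}" cat-file -e '965a0f498e125db1e5f05c928ababbce4944a867^{commit}' \
        || echo 'commit not present locally, the mirror is stale'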
[08:18:17] <abaumann> aeh, work/repos on the buildmaster has only releng?
[08:18:35] <abaumann> oh, they are now in ~/master?
[08:21:03] <abaumann> fatal: pathspec 'build-support/python-bootstrap/PKGBUILD' did not match any files
[08:30:07] <abaumann> the config files are a complete mess.
[08:30:12] <abaumann> repo_paths__packages="/home/master/packages64"
[08:30:12] <abaumann> repo_paths__community="/home/master/community64"
[08:30:12] <abaumann> repo_paths__archlinux32="/home/master/packages32"
[08:30:16] <abaumann> in common.conf on the buildmaster
[08:31:04] <abaumann> https://github.com
[08:31:08] <abaumann> this looks obsolete
[08:31:35] <abaumann> there is a slave.conf with a key
[08:33:08] <abaumann> this could explain the really strange package revisions I see lately
[08:33:37] <abaumann> this also means that we can restart rebuilding
[08:33:57] <abaumann> error: unable to write file ./objects/d6/93c94585b61475a39b4f6a74b14bdf12afa70b: No such file or directory
[08:34:07] <abaumann> splendid, it writes into the current directory
[08:34:25] <abaumann> repo_paths__archlinux32
[08:34:31] <abaumann> ok, that makes sort of sense
[08:35:14] <abaumann> slave.conf.example:#repo_paths__archlinux32="${work_dir}/repos/packages32"
[08:35:21] <abaumann> only for the slave configuration, not for the master
[08:35:23] <abaumann> *sigh*
[08:36:43] <abaumann> slowly I'm starting to get pretty annoyed about the state of things..
[08:47:25] <abaumann> git locations are hard-coded, the location of the work repo dirs depends on the config you set, and the defaults are not sane.
[08:57:08] <abaumann> I hate this: calling scripts which, as a side effect, create git repos. There should be a 'setup' or 'init' script
[08:57:15] <abaumann> I also hate config files which do stuff.
[08:57:26] <abaumann> they should just be key=value pairs, nothing else
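For comparison, reading a strict key=value file without sourcing it (i.e. without executing whatever else is in there) is a small loop; the file name and keys here are made up:

    while IFS='=' read -r key value; do
        case "${key}" in ''|'#'*) continue ;; esac    # skip blank lines and comments
        printf '%s = %s\n' "${key}" "${value}"
    done < master.conf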
[08:58:01] <abaumann> things are really hard to reproduce; I wanted to create a second buildmaster to be able to test, and that's not easy at all..
[08:58:25] <abaumann> so, I'm afraid all changes happen in the production system..
[09:00:02] <abaumann> fatal: pathspec 'build-support/python-bootstrap/PKGBUILD' did not match any files
[09:00:08] <abaumann> heck, the file _is_ in packages32
[09:06:31] <abaumann> I don't get it: build_list_lock_file is a config variable being used, it's documented and commented out in master.conf.example, but it's not set anywhere
[09:07:43] <abaumann> load-configuration:if [ -z "${build_list_lock_file}" ]; then
[09:07:43] <abaumann> load-configuration: build_list_lock_file="${work_dir}/build-list.lock"
[09:07:44] <abaumann> oh.
[09:07:53] <abaumann> well, yeah. readability. ahem.
[09:13:32] <abaumann> ok, the buildmaster works by accident because the state repo is in ~ of the master and repo_paths__archlinux32 happens to exist in a common.conf
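So the lock file does get a sane default after all; the usual flock pattern around it would look roughly like this, though the exact invocation in the builder scripts may differ:

    exec 9>"${build_list_lock_file:-${work_dir}/build-list.lock}"
    if ! flock -n 9; then
        echo 'build list is locked by someone else, backing off' >&2
        exit 1
    fi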
[09:47:09] <abaumann> I'm not convinced all repos are updated correctly.
[09:47:23] <abaumann> or in the right place
[09:54:58] <abaumann> gcc -o /build/linux-lts515/src/linux-5.15.120/tools/bpf/resolve_btfids/fixdep /build/linux-lts515/src/linux-5.15.120/tools/bpf/resolve_btfids/fixdep-in.o
[09:55:01] <abaumann> /build/linux-lts515/src/linux-5.15.120/scripts/bpf_doc.py --header \
[09:55:17] <abaumann> after years of perl in the kernel build process, now we finally have python in the build process too..
[09:55:30] <abaumann> ..luckily I consider code injection systems like BPF optional. :->
[09:58:45] <T`aZ> wait till rust is becoming mandatory :(
[10:03:45] <abaumann> I hope I'm retired by then.. :-)
[10:04:42] <abaumann> mmh. blacklisted packages are checked on every get-package-updates and then deleted again?
[10:10:43] <abaumann> so, finally I can bootstrap python-bootstrap :-)
[10:11:13] <abaumann> most problems at the moment are due to python rebuilds missing, directly or indirectly, because packages with python bindings are not building.
[10:11:48] <abaumann> I split python-bootstrap per subarchitecture; I know that the bytecode runs anywhere, but I need separate bootstraps per subarchitecture
[10:18:07] <abaumann> /home/slave1/builder/bin/build-packages: line 582: build-support-staging-with-build-support-pentium4-build: command not found
[10:18:18] <abaumann> why is the repo part of the build command in devtools?
[10:18:40] <abaumann> I'll add some symlinks downstream
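Since the devtools build wrappers dispatch on the name they are invoked as, the missing command can be provided as a symlink to the generic archbuild script; the install paths below are assumptions:

    # make the repo-specific wrapper resolve to the generic archbuild script
    ln -s /usr/bin/archbuild \
        /usr/local/bin/build-support-staging-with-build-support-pentium4-build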
[10:33:02] -!- drathir_tor has joined #archlinux32
[11:08:02] <abaumann> python-bootstrap 909768d5fa0a1dcb5732f1f3e7da2210572e9e8e f4d0a6ee3cd5cc74137a022289720142d3968b23 build-support
[11:08:16] <abaumann> mmh. upstream exists, then gets (correctly) overloaded. ok.
[11:08:49] <abaumann> the idea with build-support is really that we overload things or make completely new packages like rustXX-bin
[11:09:20] <abaumann> now comes the problem with certain build slaves which still have old devtools32
[11:09:29] <abaumann> but I can force packagebuilds to my build slaves
[12:07:59] <abaumann> -rw-r--r-- 1 http http 310 Jul 6 13:44 python-pep517-0.1-1.0-pentium4.pkg.tar.zst.sig
[12:08:02] <abaumann> -rw-r--r-- 1 http http 39922 Jul 6 13:44 python-pep517-0.1-1.0-pentium4.pkg.tar.zst
[12:08:05] <abaumann> -rw-r--r-- 1 http http 310 Jul 6 13:44 python-installer-0.1-1.0-pentium4.pkg.tar.zst.sig
[12:08:08] <abaumann> -rw-r--r-- 1 http http 249246 Jul 6 13:44 python-installer-0.1-1.0-pentium4.pkg.tar.zst
[12:08:11] <abaumann> -rw-r--r-- 1 http http 310 Jul 6 13:44 python-flit-core-0.1-1.0-pentium4.pkg.tar.zst.sig
[12:08:14] <abaumann> -rw-r--r-- 1 http http 112734 Jul 6 13:44 python-flit-core-0.1-1.0-pentium4.pkg.tar.zst
[12:08:17] <abaumann> -rw-r--r-- 1 http http 310 Jul 6 13:44 python-wheel-0.1-1.0-pentium4.pkg.tar.zst.sig
[12:08:20] <abaumann> -rw-r--r-- 1 http http 81091 Jul 6 13:44 python-wheel-0.1-1.0-pentium4.pkg.tar.zst
[12:08:23] <abaumann> -rw-r--r-- 1 http http 310 Jul 6 13:44 python-tomli-0.1-1.0-pentium4.pkg.tar.zst.sig
[12:08:26] <abaumann> -rw-r--r-- 1 http http 28336 Jul 6 13:44 python-tomli-0.1-1.0-pentium4.pkg.tar.zst
[12:08:29] <abaumann> -rw-r--r-- 1 http http 310 Jul 6 13:44 python-setuptools-0.1-1.0-pentium4.pkg.tar.zst.sig
[12:08:32] <abaumann> -rw-r--r-- 1 http http 1111518 Jul 6 13:44 python-setuptools-0.1-1.0-pentium4.pkg.tar.zst
[12:08:35] <abaumann> looking good, so we get bootstrapped python base packages per subarchitecture.. now I can force build all of python
[12:18:03] <abaumann> yep, I get tons of file conflicts when rebuilding packages..
[13:31:38] -!- drathir_tor has quit [Ping timeout: 240 seconds]
[14:41:07] -!- drathir_tor has joined #archlinux32
[14:45:01] <abaumann> python 0043321ab9aca35df480bcb5235cae19cffc7b98 95b5d58d741735391f1c129c232d614ceb6e5d64 core
[14:45:24] <abaumann> when I'm rescheduling python, 0043321ab9aca35df480bcb5235cae19cffc7b98 is not the revision in state/core-x86_64/python
[14:45:49] <abaumann> d67854fd3145180798de43fe42dcdf132ee0fee9 is the one in state
[14:46:05] <abaumann> I really start to wonder what we are building exactly
[14:47:28] <abaumann> get_upstream_package fails of course when git revisions are wrong
[14:50:43] <KitsuWhooa> Mmmhm
[14:59:21] <abaumann> In my brain, the core-x86_64/core-any and extra-x86_64/extra-any revisions in the state git repo should be used (as they are stable); they must somehow correlate to the values in package_sources
[14:59:28] <abaumann> in the mysql database
[15:01:00] <abaumann> as soon as I get broken upstream package fetches in the cache, something is terribly wrong
[15:01:41] <abaumann> this upstream package cache was only necessary because we have to get the package descriptions via tor because of bandwidth restrictions.
[15:07:32] <abaumann> yep: 0043321ab9aca35df480bcb5235cae19cffc7b98 is not a valid upstream revision for 'python'
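A sanity check for such a mismatch, assuming the state repo checkout and the upstream packages mirror are where the config above points; the ${state_repo} and ${repo_paths__packages} locations are assumptions:

    # what the state repo records for python in core
    cat "${state_repo}/core-x86_64/python"
    # does the scheduled revision exist in the upstream packages mirror at all?
    git -C "${repo_paths__packages}" cat-file -e \
        '0043321ab9aca35df480bcb5235cae19cffc7b98^{commit}' \
        || echo 'not a commit in the upstream packages repo'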
[15:09:32] <abaumann> sorry, I'm lost, I don't know what the buildmaster is doing ATM
[15:10:06] -!- abaumann has quit [Quit: leaving]
[15:30:36] <KitsuWhooa> :(
[18:51:31] -!- drathir_tor has quit [Remote host closed the connection]
[21:22:05] -!- drathir_tor has joined #archlinux32