Fear and loathing in kernel building

After a long and somewhat unreasonable delay, I have returned to bringing the Adélie Linux kernel package up to date with the latest LTS release, which at the time of this writing is 6.6.58.

Presently, we use the 5.15 LTS branch. I am hoping to see us land the 6.6 branch so that we can have support for newer hardware, features, and devices. There is also hope that there will be significant DRM improvements, allowing a better desktop experience for everyone.

Unfortunately, when it came time to build the x86_64 package, the build failed. The kernel now requires elfutils to build on x86_64, even with CPU security issue mitigations disabled – and we don’t want to disable them anyway.

The elfutils library, being part of the GNU project, heavily relies on APIs that are only available in the GNU libc. It is not possible to build elfutils on a musl system without multiple shim libraries, in addition to patching out other behaviour that cannot be stubbed.

I have always been somewhat mistrustful of including unmaintained, unaudited software in the critical path – and building the kernel is the most critical path in a distribution.

For this reason, if we must include an argp implementation, I want to make sure it is the best possible implementation we can have.

“Choice”: slim to none

I found a number of libraries that implement the argp interface, but all of them present significant challenges:

  • libargp: Based on gnulib code. Last commit: 9 years ago. Does not accept issues on GitHub, and links to a Bitbucket repository that has been removed. 9 years ago, gnulib didn’t support musl at all. In addition, the inability to contact upstream isn’t great.
  • argp-standalone (Niels Möller edition): Based on glibc, which is what we are trying to emulate anyway. Last release: 20 years ago, which is approaching legal drinking age in the US. Pass.
  • argp-standalone (Érico edition): Based on glibc again. Last commit: 2 years ago. Okay, reasonable. The issues and pull requests are piling up, though: the build system isn’t generated in the released tar files, the install target doesn’t work, a shared library isn’t supported, and more.
  • argp-standalone (org edition): Somehow, we are now three forks deep. Last commit: just three months ago! It uses Meson as a build system, and it seems to care about portability… but it fails to support non-English locales. It also appears to have problems building with GCC 14, which could cause trouble in the future. In June 2023 they acknowledged that they should cut a new release, which is great, but they never actually did.

Honestly, the last option in that list isn’t so bad. Translation support could always be added later. However, these packages need to be added to our very core critical path of early packages built for the whole system. For that reason, we need to be excessively picky about:

  • Quality of implementation — is this trustworthy enough to be at the very centre of our dependency graph?
  • Dependencies — the Meson build system is great, but that introduces either Python or Muon into the very early graph, before the kernel is even built, which means kernel headers aren’t available.
  • Infrequency of updates — realistically, since changing packages this deep in the graph necessitates rebuilds of everything, updates cannot be done frequently.

And it is for that reason I am annoyed at this situation. The kernel has introduced a build time dependency that, at least on musl libc systems, presents a lot of uncomfortable challenges.

Oh well, it could be worse. It could be the Rust compiler, which means making Rust, LLVM, and all of their dependencies part of the early graph, and meaning that Rust compiler updates have to pass through the Platform Group and cause full system rebuilds!

I’ll see myself out now.

Expanding the Retro Lab, and Putting It to Work

Over the past month, I have been blessed with being in the right place at the right time to acquire a significant amount of really cool computers (and other technology) for the Retro Lab.

Between the collection I already had and these new “hauls”, I now have a lot of computers. I was, ahem, encouraged to stop using the closets in my flat to store them and finally obtained a storage locker for the computers I’m not using. It’s close to home, so I can swap between what I want to work on virtually at will.

Now I am thinking about ways to track all of the machines I have. One idea I’ve had is to use FileMaker Pro for the Power Macintosh to track the Macs, and FoxPro to track the PCs. One of my best friends, Horst, suggested I could even use ODBC to potentially connect the two.

This led me to all sorts of ideas regarding ways to safely and securely run some server services on older systems and software. One of my acquisitions was a Tyan 440LX-based server board with dual Pentium II processors. I’m thinking this would be a fun computer to use for NT. I have a legitimate boxed copy of BackOffice Server 2.5 that would be perfect for it, even!

Connecting this system to the Internet, though, would present a challenge if I want to have any modicum of security – so I’ve thought it out. And this is my plan for an eventual “Retro Cloud”.

Being a cybersecurity professional, my first thought was to completely isolate it on the network. I can set up a VLAN on my primary router, and connect that VLAN to a dedicated secondary router. That secondary router would have total isolation from my present network, so the “Retro Cloud” would have its own subnet and no way to touch any other system. This makes it safer to have an outbound connection. I’ll be able to explore Gopherspace, download updates via FTP, and all that good stuff.

Next, I’m thinking that it would make a lot of sense to have updated, secure software to proxy inbound connections. Apache and Postfix can hand sanitised requests to IIS and Exchange without exposing their old, potentially vulnerable protocol handlers directly to the Internet.
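As a rough sketch of the HTTP side, the Apache front end could proxy into the isolated VLAN something like this – the hostname and address here are hypothetical placeholders:

```apache
# Hardened front end; the NT box never faces the Internet directly.
<VirtualHost *:80>
    ServerName retro.example.com

    # Act only as a reverse proxy, never a forward proxy.
    ProxyRequests Off
    ProxyPreserveHost On

    # Hand sanitised requests to IIS on the isolated retro subnet.
    ProxyPass        / http://10.99.0.10/
    ProxyPassReverse / http://10.99.0.10/
</VirtualHost>
```

Postfix could play the same role for mail, accepting and filtering SMTP before relaying to Exchange on the internal subnet.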

And finally, as long as everything on the NT system is public knowledge anyway – don’t (re)use any important passwords on it, don’t have private data stored on it – the risk is minimal even if an attacker were able to gain access despite these protections.

I’m still in the planning stages with this project, so I would love to hear further comments. Has anyone else set up a retro server build and had success securing it? Are there other cool projects that I may not have even thought of yet? Share your comments with me below!