The musl preprocessor debate

Today, I would like to discuss a project that I care very deeply about: the musl libc. One of the most controversial and long-standing debates in the musl community concerns the fact that musl does not define a preprocessor macro identifying itself.

What’s in a macro?

Simply put, preprocessor macros allow C code to build parts of itself conditionally. For example, the GNU libc defines the “__GLIBC__” macro. If your code needs to do something specific to function properly on systems using that library, it can conditionally build that code using “#ifdef __GLIBC__”.

The authors of musl have said that they will not add a preprocessor macro identifying the platform as musl because:

It’s a bug to assume a certain implementation has particular properties rather than testing.

Rich Felker, “Re: #define __MUSL__ in features.h”, 2013-03-29

I agree with this sentiment in theory, and in an idealised world this would hold up. However, I’d like to discuss why I think this may need to be reconsidered moving forward.

Sometimes you can’t test

One major reason this is an issue is that sometimes it is not possible to do what the authors consider the “correct” form of testing, which is compile-testing.

This practice requires you to build a small test program, determine whether it built properly, determine its runtime characteristics, and then use the results of that test to influence how your actual software is built. It is an alternative to conditional compilation with preprocessor macros.

However, there are many reasons you may not be able to successfully perform such testing. Cross compilation is a large gap here. In fact, many years ago when I was starting the Adélie project, this caused failures in the base image I was building.

The Bash shell could not perform any compile-time or run-time checks because it was being cross-compiled from a GNU libc system to a musl libc system. This caused it to use “fallback” code that worked improperly. If musl had defined a __MUSL__ macro, Bash would not have needed to assume it was running on a pre-POSIX system.

Similarly, the mailing list thread that made me feel strongly enough to write this article involves a header-only library. These types of libraries are meant to be “drop-in” and function without any changes to a developer’s build system. If header-only libraries start requiring you to use build-time tests, you lose the main reason to use them in the first place.

The author of this thread correctly points out that FreeBSD versions its API with a preprocessor macro. Any software that requires a certain API can simply ensure that __FreeBSD_version is defined as greater than or equal to the version that introduced that API.

The main reason the musl project is wary of this approach, from my observation, is that features or APIs (or indeed, bug fixes) can be backported to prior versions. I feel very strongly that this is not the responsibility of the libc.

If a distribution backports a feature, API, or patch to an older version of a library, it is that distribution’s responsibility to ensure that the software they build against it continues to function. When I backported an API from Qt 5.10 to 5.9 to ensure KDE continued building for Adélie, it was my responsibility as maintainer of those packages to keep them building properly. It certainly does not mean Qt should stop defining a preprocessor macro to determine the version being built against.

Additionally, some APIs are privileged. Determining whether these APIs work correctly using run-time testing can prevent CI/CD from working properly because the CI user does not have permission to use them.

A versioned macro like FreeBSD’s makes sense

I feel that the best way forward for musl is to define a macro like FreeBSD’s: one that monotonically increases as APIs or features are added.

I agree that simple bug fixes, and even behavioural changes, probably should not be tracked with this macro. However, this would make it significantly easier to use new APIs as they are introduced.

It also makes builds more efficient. The cost of compile-time tests racks up quickly. On my POWER9 Talos workstation, typical ./configure runs take longer than the builds themselves. This is because fork+exec is still a slow path on POWER. It is similar on ARM, MIPS, and many other RISC architectures.

Macros like these don’t fully eliminate the need for ./configure, but they lessen the workload. Compile-time tests make sense for behaviour detection, but they do not make sense for API detection.

Daily-driving a Mac, one year later

It has been about a year since I published Really leaving the Linux desktop behind. This marks the first year I’ve used Mac OS as my primary computing environment since 2014. Now, I want to summarise my thoughts and feelings on using the Apple ecosystem as my primary platform – good and bad.

The Amazing

Universal Clipboard

Universal Clipboard has dramatically simplified my blog workflow. Typically, all of my articles are drafted and composed on my iPad Pro, as WordPress offers a great native app. This allows me to avoid using a browser. As I write this article, I am copying the links out of Safari on my Mac. They immediately show up in the pasteboard of the iPad.

KDE Connect does offer a Clipboard plugin, but it only supports plain text at this time. Apple’s implementation allows you to copy rich text, photos, files, and more.

Something else I would really like to note is that these features will work on High Sierra and later. It is transparent to the user no matter what version of the OS they are running. This somewhat alleviates the issue of newer OS versions having newer device requirements.

Safari Tab Groups

Safari 15 introduced the concept of Tab Groups, which is something I have been missing a lot since Firefox killed off extensions and replaced them with a severely limited alternative. Tab Groups simply allow you to categorise groups of tabs into cohesive sets. You can almost consider it a “focus window” where a logical set of tabs live.

The best part is that Safari’s Tab Groups sync between iCloud devices, which means I can use, add to, and manipulate tab groups on my tablet and phone as well. The replacement extension I used for Firefox had most of the features I enjoyed, but it doesn’t sync between devices using Firefox Sync, which meant I could really only browse the Web in this powerful way on my main desktop (then, my Talos II).

Having tab groups that sync between devices has allowed me to bring order to my previously chaotic Web browsing habits, allowing me to focus better and waste less time being distracted.

AirPlay

I love having the ability to AirPlay my screen directly to a TV. As far as I am aware, it is not possible on Linux to share your entire display to a smart TV without complicated command-line invocations that change regularly.

I used this to show my grandmother family photos while she was recovering from a health challenge. We use this monthly for budget planning in our household – just share Excel to the TV and we can see and discuss where the money is going this month.

There is no reason that this couldn’t be implemented on Linux, but off the top of my head I can think of a few challenges: the TV may have a different DPI than the computer screen (which has always been challenging for Linux windowing systems and toolkits), compressing the video using a libre codec while providing good picture quality and low bandwidth usage, and the general sorry state of wireless networking in Linux (which is due to the chipmakers, I know).

Apple Maps

Having a Maps app on my computer that I can use to view place details, satellite imagery, landmark information, and plan routes is a very powerful tool. I use this regularly to find places to shop local, and to plan weekend excursions to parks and attractions.

The closest thing I found on Linux was Marble. While I did enjoy the fact that Marble integrated so well with OSM, the views were always slightly grainy and off. Zoom and pan needed work and I could never understand the code well enough to contribute a fix.

You can still boot Linux on them

The Asahi Linux project has done an amazing job building a boot loader for the M1 that should allow a whole host of alternative systems to run. This includes not just Linux but also the BSDs, and perhaps even illumos when they bring up ARM64 support.

I remember when the M1 came out, everyone thought the firmware would be locked down and prevent non-Mac OS systems from running at all. It turns out that not only did this not happen, but you can actually sign your own kernels and have Trusted Boot using your own compiled Linux. This may end up making the M1 more libre-friendly than x86 systems.

Misc

  • Apps like Things really demonstrate the power of the Mac platform and what is on offer. You could probably make something as nice and integrated as Things on Linux, but for someone as busy as me, it is nice to use what is already there.
  • I feel much more in control of notifications on the Mac platform than I did on Linux with libnotify and Plasma. Notifications can be handled per-app, not just per-notification in the app itself. “Focus modes” (DnD) sync with my other devices like my phone and tablet. I can set repeating schedules (or one-offs) with profiles that allow some apps through but not others.
  • Older devices really are still supported. Even if you can’t boot Big Sur or Monterey on them – and the list of devices that can grows quite large if you are willing to play with a patching system – most of the niceties I’ve written about work back to High Sierra.

The drawbacks

The only real drawback that I’ve found in this year is that since the Mac isn’t a fully libre open-source system, I can’t fix the few bugs that I’ve run into.

I have not felt “trapped” or “helpless” or at all like I am living in a walled garden. Terminal is still there, unsigned apps can still be run with a simple context-click, and AppleScript (and now Shortcuts) is available to automate workflows.

I still believe that libre software ideals are correct and the goal of having a libre operating environment is a good one. However, I also believe that it was perhaps naive of me to believe that such a thing can truly exist in the way I hoped it could. The people who develop libre operating environments have different priorities.

And when you are spending your days using technology instead of making technology, libre software ideals genuinely do become more of a theoretical concern than something in your face. This can be good or bad, depending on your viewpoint.

At the end of the day, my goal in life is to make a difference, and also have a bit of fun. I want a system that is out of my way and lets me focus on that. For me, in 2022, that system is a Mac.

A final word on cost

Far too many people are priced out of the Apple ecosystem. I understand that part of the high cost of Apple products is to subsidise the R&D of making all these things work so well. However, they also have pretty high profit margins beyond their R&D expenditures.

I wish that Apple would lower their price, even a little, so that this amazing technology that works so well could be in the hands of more people.

Everyone on Earth deserves technology that is easy to use and lets them have a fun, happy life. That was my goal when I started the Adélie Linux project, and I only wish that more open source projects would do the same. Until then, I will continue doing my part to make the world a little bit better from the keyboard of a Mac.

Really leaving the Linux desktop behind

I’m excited to start a new chapter of my life tomorrow. I will be starting a new job working at an excellent company with excellent benefits and a comfortable wage.

It also has nothing to do with Linux distributions.

I have asked, and been granted, clearance to work on open source software during my off time. And I do plan on writing libre software. However, I really no longer believe in the dream of the Linux desktop that I set out to create in 2015. And I feel it might be beneficial for everyone if I describe why.

1. Stability.

My goal for the Linux desktop started with stability. Adélie is still dedicated to shipping only LTS releases, and I still feel that is useful. However, this has been made more difficult because Qt has removed LTS releases from the open source community, plainly admitting that they want us to be their beta testers and that paid commercial users are the only ones who deserve stability. This is obviously antithetical to having a stable libre desktop environment.

Mozilla keeps pushing its release cycles closer together in a desperate attempt to compete with evil G (more on this in the next section). This means the yearly ESR releases, which Adélie depends on for some modicum of stability, are unfortunately being left behind by whiz-bang web developers who don’t understand that not everyone wants to run Fx Nightly.

I think stability is the easiest of these points to argue could still be fixed. You might be able to sway me on that. Some upstreams are finally dedicating themselves to better release engineering. And I’ve been happy to find that even most power users don’t care about running the bleeding edge as long as their computer works correctly.

My overall hope for the future: more libre devs understand the value of stable cycles and release engineering.

My fear for the future: everything is running off Git main forever.

2. Portability.

It’s been harder and harder for me to convince upstreams to support PowerPC, ARM, and other architectures – even as Microsoft and Apple introduce flagship laptop models based on ARM, and Raptor continues to sell out of their Talos and Blackbird PPC systems.

A significant portion of issues with portability come from Google code. The Go runtime does not support many non-x86 architectures. And the ones it does, it does poorly. PPC support in Golang is 64-bit only and requires a Power8, which is equivalent to an x86 program requiring a Skylake or newer. You could probably get away with it for an end-user application, but no one would, or should, accept that in a systems programming language.

Additionally, the Chromium codebase is not amenable to porting to other architectures. Even when the Talos user community offered a PowerPC port, it was rejected outright. This is in addition to its close ties to glibc, which mean musl support requires thick patches with thousands and thousands of lines. They won’t accept patches to Skia or WebP for big-endian support. In general, they do not believe portability is something desirable.

This would be fine and good since GCC Go works, and we do have Firefox, Otter (which can still use Qt WebKit), and Epiphany for browsers. However, increasingly, important software like KMail is depending on WebEngine, which is a Chromium embedded engine. This means KDE’s email client will not run on anything other than x86_64 and ARMv8, even though the mail client itself is portable.

This also has ramifications for user security and privacy. The Chromium engine regularly has large, high-risk security holes, which means even if you do have a downstream patch set to run on musl or PowerPC, you need to forward-port it as they release. And their release model is insanely paced. They rewrite large portions of the engine with distressing regularity. This makes Chromium unsuitable for tracking in a desktop that requires stability and security, in addition to portability.

And with more and more Qt and KDE apps (IMO, mistakenly) depending on WebEngine, this means more and more other apps are unsuitable for tracking.

My overall hope for the future: more libre devs care about accepting patches for running on non-x86 architectures. The US breaks up Google and kills Chromium for violating antitrust and RICO laws.

My fear for the future: everything is Chrome in the future.

3. The graphics stack.

I’ve made no secret of the fact that my personal opinion is that it would still, even today, be easier to fix X11 than to make Wayland generally acceptable for widespread use. But, let’s put that aside for now. Let’s also put aside the fact that they don’t want to work on making it work on nvidia GPUs, which represent half of the GPU market.

At the behest of one of my friends, who shall remain nameless, I spent part of my December break trying to bring up Wayland on my PowerBook G4. This computer runs KDE Plasma 5.18 (the current LTS release) under X11 with no issues or frameskip. It has a Radeon 9600XT with hardware OpenGL 2.1 support.

It took days to bring up anything on it because wlroots was being excessively difficult with handling the r300 for some reason. Once that was solved, it turned out it was drawing colours wrong. Days of hacking at it revealed that there are likely some issues in Mesa causing this, and that this is likely why Qt Quick requires the Software backend on BE machines.

When I asked the Wayland community for a few pointers at what to look at, since Mesa is far outside of my typical purview of code (graphics code is still intimidating to me, even at 30), I was met with nothing but scorn and criticism.

In addition, I was still unable to find a Wayland compositor that supports framebuffers and/or software rendering, which would have removed the need to fix Mesa for the time being. Framebuffer support would also allow it to run on computers that run LXQt fine, like my Pentium III and iBook G3, both of which have Rage 128 cards that don’t have hardware GL2. This, too, was met with scorn and criticism.

Why should I bother improving the Wayland ecosystem to support the hardware I care about if they actively work against me, then blame the fact that cards like the S3 Trio64 and Rage 128 don’t have DRM2 drivers?

My overall hope for the future: either Wayland compositors supporting more varied kinds of hardware, or X11 being improved and obviating the need for Wayland.

My fear for the future: you need an RX 480 to use a GUI on Linux.

4. Usability.

This is more of an objective point than a subjective one, but the usability of desktop Linux seems to be eternally stuck just below that of other environments. ElementaryOS is closest to fixing this, but there is still much to be desired from my point of view before they’re ready for prime time.

In conclusion.

I still plan to run Linux – likely Adélie – on all servers I use. (My fallback would be Gentoo, even after all these years and disagreements, if you were wondering.)

However, I have been slowly migrating my daily personal life from my Adélie laptop to a Mac running Catalina. And, sad as it is to say, I’ve found myself happier and with more time to do what I want to do.

It is my genuine hope that maybe in a few years, if the Linux ecosystem seems to be learning any of these lessons, maybe I can come back to it and contribute in earnest once again. Until then, it’s system/kernel level work and hacking POSIX conformance in to musl for me. The Linux desktop has simply diverged too far from what I need.

The Retro Lab: Diggings around the fox den

I was cleaning out a desk and its cubbies in preparation for setting up the Retro Lab and I found some pretty interesting discs. I’ll be taking better photos, or possibly even making hi-res scans with the SCSI ScanJet, later. But I couldn’t keep these to myself.

  • A CD-R of pre-musl Alpine Linux, burned for my home lab’s Xen hypervisor ca. 2012. This would have been my second Alpine deployment, after the Pentium II/300 that allowed me to contribute the initial Django port to aports.
  • InstallShield’s CDSource. This has demos and information about all InstallShield products. This may be worth sending to the Internet Archive.
  • Back of the InstallShield CDSource sleeve detailing what it has.
  • Sun Solutions CD: Volume 2, 2000. I told everyone I’ve been using Solaris forever! This has a lot of tools related to Solaris and Java development, and tries hard to sell you on Sun’s Developer Connection Program. There are actually two CDs. Also probably worth imaging and sending to IA.
  • Back of the Sun Solutions CD sleeve.

And now, for what is likely the single most important CD in my life. The CD that enabled me to learn about this little project called Linux.

Red Hat Linux 5.0, Codename Hurricane. Date: 11/10/97 (written 1/12/98)

There are many more historic relics in this pile, but I should go to sleep. Have a great weekend, everyone!