Really leaving the Linux desktop behind

I’m excited to start a new chapter of my life tomorrow: a new job at an excellent company, with excellent benefits and a comfortable wage.

It also has nothing to do with Linux distributions.

I have asked, and been granted, clearance to work on open source software during my off time. And I do plan on writing libre software. However, I really no longer believe in the dream of the Linux desktop that I set out to create in 2015. And I feel it might be beneficial for everyone if I describe why.

1. Stability.

My goal for the Linux desktop started with stability. Adélie is still dedicated to shipping only LTS releases, and I still feel that is useful. However, this has become more difficult now that Qt has removed LTS releases from the open source community, plainly admitting that they want us to be their beta testers and that paying commercial users are the only ones who deserve stability. This is obviously antithetical to having a stable libre desktop environment.

Mozilla keeps pushing its release cycles closer together, in a desperate attempt to compete with evil G (more on this in the next section). This means that the yearly ESR releases, which Adélie depends on for some modicum of stability, are unfortunately being left behind by whiz-bang web developers who don’t understand that not everyone wants to run Fx Nightly.

Of all these points, stability may be the easiest one to argue could still be fixed; you might even be able to sway me on it. Some upstreams are finally dedicating themselves to better release engineering. And I’ve been happy to find that even most power users don’t care about running the bleeding edge, as long as their computer works correctly.

My overall hope for the future: more libre devs understand the value of stable cycles and release engineering.

My fear for the future: everything is running off Git main forever.

2. Portability.

It has become harder and harder for me to convince upstreams to support PowerPC, ARM, and other architectures, even as Microsoft and Apple introduce flagship laptop models based on ARM, and Raptor continues to sell out of their Talos and Blackbird PPC systems.

A significant portion of portability issues come from Google code. The Go runtime does not support many non-x86 architectures, and the ones it does support, it supports poorly. PPC support in Golang is 64-bit only and requires a POWER8, which is equivalent to an x86 program requiring a Skylake or newer. You could probably get away with that for an end-user application, but no one would, or should, accept it in a systems programming language.

Additionally, the Chromium codebase is not amenable to porting to other architectures. Even when the Talos user community offered a PowerPC port, it was rejected outright. This is in addition to Chromium’s close ties to glibc, which mean that musl support requires thick patch sets running to thousands and thousands of lines. They won’t accept patches to Skia or WebP for big-endian support. In general, they do not believe portability is a desirable quality.

This would be all well and good, since GCC Go works, and we do have Firefox, Otter (which can still use Qt WebKit), and Epiphany for browsers. However, important software like KMail increasingly depends on Qt WebEngine, which embeds Chromium. This means KDE’s email client will not run on anything other than x86_64 and ARMv8, even though the mail client itself is portable.

This also has ramifications for user security and privacy. The Chromium engine regularly has large, high-risk security holes, which means that even if you do have a downstream patch set to run on musl or PowerPC, you need to forward-port it with every release. And their release pace is insane: they rewrite large portions of the engine with distressing regularity. This makes Chromium unsuitable for tracking in a desktop that requires stability and security, in addition to portability.

And with more and more Qt and KDE apps (IMO, mistakenly) depending on WebEngine, more and more of them become unsuitable for tracking as well.

My overall hope for the future: more libre devs care about accepting patches for running on non-x86 architectures. The US breaks up Google and kills Chromium for violating antitrust and RICO laws.

My fear for the future: everything is Chrome in the future.

3. The graphics stack.

I’ve made no secret of my personal opinion that it would still, even today, be easier to fix X11 than to make Wayland generally acceptable for widespread use. But let’s put that aside for now. Let’s also put aside the fact that the Wayland developers don’t want to support Nvidia GPUs, which represent half of the GPU market.

At the behest of one of my friends, who shall remain nameless, I spent part of my December break trying to bring up Wayland on my PowerBook G4. This computer runs KDE Plasma 5.18 (the current LTS release) under X11 with no issues or frameskip. It has a Radeon 9600XT with hardware OpenGL 2.1 support.

It took days to bring up anything on it, because wlroots was being excessively difficult about handling the r300 for some reason. Once that was solved, it turned out it was drawing colours wrong. Days of hacking at it revealed that there are likely some issues in Mesa causing this, and that this is likely why Qt Quick requires the software backend on big-endian machines.

When I asked the Wayland community for a few pointers on where to look, since Mesa is far outside my typical purview of code (graphics code is still intimidating to me, even at 30), I was met with nothing but scorn and criticism.

In addition, I was still unable to find a Wayland compositor that supports framebuffers and/or a software mode, which would have removed the need to fix Mesa for now. Framebuffer support would also allow Wayland to run on computers that run LXQt fine, like my Pentium III and iBook G3, both of which have Rage 128 cards without hardware GL2. This request was also met with scorn and criticism.

Why should I bother improving the Wayland ecosystem to support the hardware I care about if they actively work against me, then blame the fact that cards like the S3 Trio64 and Rage 128 don’t have DRM2 drivers?

My overall hope for the future: either Wayland compositors supporting more varied kinds of hardware, or X11 being improved and obviating the need for Wayland.

My fear for the future: you need an RX 480 to use a GUI on Linux.

4. Usability.

This is more of an objective point than a subjective one, but the usability of desktop Linux seems to be eternally stuck just below that of other environments. elementary OS comes closest to fixing this, but from my point of view there is still much to be desired before they’re ready for prime time.

In conclusion.

I still plan to run Linux – likely Adélie – on all servers I use. (My fallback would be Gentoo, even after all these years and disagreements, if you were wondering.)

However, I have been slowly migrating my daily personal life from my Adélie laptop to a Mac running Catalina. And, sad as it is to say, I’ve found myself happier and with more time to do what I want to do.

It is my genuine hope that in a few years, if the Linux ecosystem seems to be learning any of these lessons, maybe I can come back to it and contribute in earnest once again. Until then, it’s system/kernel-level work and hacking POSIX conformance into musl for me. The Linux desktop has simply diverged too far from what I need.

Mozilla finally disavows Discord

mhoye’s new blog post on the future of Mozilla community chat came out last week. He notes about Discord that “their active hostility towards interoperability and alternative clients has disqualified them as a community platform.”

I am very thankful that the Mozilla brass have realised this, as I pointed out in an earlier installment. Kudos to them: three of their four options are fully libre. I sincerely hope they choose one of those three, and keep the Mozilla community libre and open.

Speaking with authority

I’ve just spent the better part of three hours arguing on IRC about Let’s Encrypt clients. After speaking with two others, I realised that none of the people I had spoken with before actually knew whether their “facts” were facts.

Different people all told me various incorrect information, such as:

  • No ACME client supports manual DNS TXT record verification for bootstrapping before you have an httpd up. (acme.sh, dehydrated, and certbot all support this.)
  • LE needs IPv4 for the HTTP challenge. (It worked fine for me with an IPv6-only host; see the sketch after this list. I’m not sure which it would prefer given the choice between v6 and v4, or whether it would use Happy Eyeballs and connect to whichever responded first.)
  • It isn’t possible to step through the process manually as a debugging aid; you have to rely on your ACME client’s debugging facilities. (https://gethttpsforfree.com/ helped a tonne.)
  • You have to be listening for the HTTP challenge on port 80. (The TLS-ALPN-01 challenge type exists, which only ever uses port 443 for the challenge.)
  • Isolating each service on a separate VM, as I do for security, is “over-convoluted”.
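
Several of these claims are trivial to test. Here is a minimal sketch, in Python, of answering an HTTP-01 challenge by hand on an IPv6-only host, using nothing but the standard library. The TOKEN and KEY_AUTH values are placeholders; in reality, your ACME client (or a step-through tool like gethttpsforfree.com) gives you the real ones.

    # Minimal HTTP-01 responder sketch for a v6-only host, stdlib only.
    # TOKEN and KEY_AUTH are placeholders; your ACME client (or a manual
    # step-through tool) gives you the real values.
    import socket
    from http.server import BaseHTTPRequestHandler, HTTPServer

    TOKEN = "replace-with-challenge-token"
    KEY_AUTH = "replace-with-key-authorisation"

    class ChallengeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/.well-known/acme-challenge/" + TOKEN:
                body = KEY_AUTH.encode("ascii")
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    class V6HTTPServer(HTTPServer):
        # Bind the IPv6 wildcard; on most Linux hosts this also accepts
        # v4-mapped IPv4 connections unless net.ipv6.bindv6only is set.
        address_family = socket.AF_INET6

    # Port 80 (which needs privileges) is only required for HTTP-01;
    # the TLS-ALPN-01 challenge type uses 443 instead.
    V6HTTPServer(("::", 80), ChallengeHandler).serve_forever()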

All of these people spoke with an air of authority. They sounded like they genuinely knew what they were talking about, and were trying to inform me of the limitations of ACME clients / Let’s Encrypt. Nobody actually knew the answer, but they thought they were right because it fit their experience.

When I speak to people about technology, whether in real life, on IRC, or on a mailing list, I always try to make the limitations of my knowledge clear. Many is the time I have said “I’m not sure if you can do that”, or “I don’t know if X supports Y”. And sure, on occasion I will say “I have never done Y and last I knew X couldn’t do that”. Note, however, that all of these are presented as statements from my hive of knowledge, and not presented as plain facts. The art of communication seems to be lost on far too many in the technology field.

There is no shame in not knowing the answer to something. It is certainly more helpful to say “I’m not sure you can do that”, instead of “you can’t do that”. I almost gave up on Let’s Encrypt and wrote another article on how useless it is, because I was told by people who used Let’s Encrypt that it had all of these limitations that made it seem useless, arbitrary, and ridiculous to me. (Thanks to Rich Felker of musl, and Freeyorp, for setting the record straight.)

Maybe we need a new term for this. “Organic FUD”, since it comes from the community itself? At any rate, I hope that in the future, more people note the limitations of their knowledge up-front rather than sounding authoritative about a subject they know little about.

GitHub and IPv6, three years later

Three years ago, I wrote Going IPv6 native without IPv4, which noted all the services I couldn’t access over IPv6. After all this time, there is some good news, and bad news.

First, the good news: BitBucket, Savannah, and Launchpad all support IPv6 now!

Now, the bad news: GitHub still does not. This has actually prevented me from setting up a trial run of acme.sh on a server. The server I was going to test LE on is only connected to the public Internet via IPv6. Yes, I was actually trying to see if Let’s Encrypt has gotten any better, and I was prevented from doing it because GitHub does not support IPv6.
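
If you want to verify this yourself, a few lines of Python will do it. This simply asks the resolver for AAAA records, so the result reflects the state of things as I write this.

    # Check whether a host publishes AAAA (IPv6) records.
    import socket

    def has_ipv6(host):
        try:
            # AF_INET6 restricts the lookup to AAAA records only.
            socket.getaddrinfo(host, 443, socket.AF_INET6)
            return True
        except socket.gaierror:
            return False

    for host in ("github.com", "bitbucket.org"):
        print(host, "has AAAA records" if has_ipv6(host) else "is v4-only")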

Authors of ACME clients, especially those only available via GitHub: find a mirror that supports IPv6! At this point, I’m going to have to set up acme.sh on my workstation, and then scp the certificates over to the server every 60 days. Thanks, GitHub.