Looking forward to 2023

(Note: This draft was being written when that Monday Night Football incident happened, so it was shelved for a bit.)

As this is my last day of holiday break, I thought I’d reflect a bit on what makes me the most excited for the coming year. Obviously, none of us know what the future holds, but these are some of my hopes for 2023:

Social stuff

It looks like Twitter might survive after all, but the fragmentation, and the millions of people moving to the Fediverse, intrigue me. I am very curious to see where the Fediverse goes now that it has so much more interest. I am hoping to see people like journalists and meteorologists start using it in earnest; they were some of my favourite follows on Twitter. It would be great to see the platform grow to cover new interests, since the majority of people there currently lean towards tech.

While its privacy policies and business practices still disturb me, this year will likely be the year I rekindle my Facebook account. There are still family members and friends of mine who use it, and some pretty nifty retrocomputing groups are on it as well. If you can’t beat ‘em, join ‘em, I guess. Any content I post on Facebook will be mirrored to better platforms, so those of you who want to keep staying away won’t be missing anything special. I just don’t want to lose connections I could otherwise have because of my aversion to late-stage advertisement capitalism.

Apple ecosystem

My iPad Pro is going to see more usage this year now that Stage Manager is finally available, bringing multiple app/window support. The lack of proper multi-window support is something that I’ve personally felt has kept the iPad from living up to its full potential, and something I remember the jailbreak scene implementing for years, so I’m quite happy to see Apple finally put it in official system software.

While I know rumours abound and there is no reason to think it would be released this year, I’m eternally looking forward to a wearable – like, say, an Apple Watch – that can also function as a glucometer. As someone with type 1 diabetes, it’d be a real boon to have even a rough estimate of my blood glucose level without having to wear a separate sensor.

It would be very cool, though unlikely, to see a MacBook Pro with a Dynamic Island like the iPhone 14 Pro.

Retrocomputing

I’ve received a lot of goodies, hardware and software, over autumn and winter. I can’t wait to put them to good use in the Retro Lab. I’m hoping to write a number of new articles in my Retro Lab series.

There are a number of software development projects I’d like to tinker with in the retrocomputing circle. I’m keeping details vague for now, as I don’t want to make any promises, but my focus, as always, will be on making classic Macs and Windows NT useful in the modern era.

Linux and libre software

I’ve been following the SPDX project’s continuing drive to build automated tooling for discovering and managing the licences of software packages. It would be very cool to integrate some of these tools into package managers like APK.

The Qt project is still not in my good graces after their decision to make LTS releases commercial-only. That feeling only grew stronger when it was announced that qmlsc, the QML compiler that turns QML apps into performant, non-interpreted C++ apps, is also available only to commercial Qt customers. Maybe the KDE team will maintain a libre Qt 6 LTS branch in the same way they support 5.15?

Speaking of LTS branches of things with major version 6, the Linux kernel’s 2023 LTS edition should be pretty exciting. Linux 6.1 and 6.2 bring much broader support for AArch64 boards, including the Apple M1 and the Qualcomm 8cx Gen 3. When the Linux 6.x LTS lands, it will be very exciting to dual-boot mainline Linux on my M1 MacBook Pro.

I am personally hoping to have some time to devote to “traditionally opposite” endian projects. Specifically, I want to see if I can bootstrap an aarch64_be (big-endian ARM) environment on my Pine A64, and similarly bootstrap a ppc64el (little-endian POWER) environment. There are probably a lot of false assumptions about byte order lurking in code that has never been built for aarch64_be before.
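
As an illustration of the kind of false assumption I mean, here is a contrived sketch (my own, not taken from any particular project) of the classic bug: type-punning a little-endian wire format through host memory, which works by accident on little-endian hosts and silently breaks on a big-endian host like aarch64_be:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Broken on big-endian hosts: the value takes on the host's
   byte order, not the wire format's. */
static uint32_t read_u32_punned(const unsigned char *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof v);
    return v;
}

/* Correct everywhere: assemble the little-endian value explicitly. */
static uint32_t read_u32_le(const unsigned char *p)
{
    return (uint32_t)p[0]       | (uint32_t)p[1] << 8 |
           (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

int main(void)
{
    const unsigned char wire[4] = { 0x78, 0x56, 0x34, 0x12 };
    /* On aarch64_be the punned value prints 0x78563412;
       the explicit one prints 0x12345678 on every host. */
    printf("punned: 0x%08x  explicit: 0x%08x\n",
           (unsigned)read_u32_punned(wire), (unsigned)read_u32_le(wire));
    return 0;
}
```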

Adélie continues to improve regularly, and hopefully this will finally be the year of the release of Adélie Linux 1.0. Yes, I am taking on a somewhat more active role again, and no, I do not want to comment 😉

Lastly, it will be exciting to see where the GCC Rust front end goes. Hopefully it will lead to significant improvements in Rust’s bootstrap story, which would make the language more useful and approachable for people who cannot use, or do not want to trust, the Mozilla-provided binaries.

Personal

I want to take photography seriously again. Photography can tell a story, document history, and transport others to a new perspective. I really enjoy taking these kinds of photos and hope to have some great snapshots to share throughout the year.

In addition to the retrocomputing projects, there are a few other, non-retro software development and library improvement projects that I hope to spend some time on this year, among them Wayland on Power, Zig on big-endian Power, and better compression support in APK Tools.

In conclusion

That is an overview of what I hope to devote my time to in 2023. What do you think? Are there cool developments that I should be looking at that I missed? Are you excited about some of these too? Feel free to discuss in the comments!

What’s the deal with Cisco devices in `file` output, anyway?

If you work on PowerPC systems of some kind – or maybe you work on car MCUs that use the NEC V800 CPU – you may have run across some strange output when you run the file command on any binary:

/usr/bin/file: ELF 32-bit MSB pie executable, PowerPC or cisco 4500, version 1 (SYSV), dynamically linked, interpreter /lib/ld-musl-powerpc.so.1, stripped

Of course it’s a PowerPC binary, but why the mention of “cisco 4500” (or “cisco 7500” for 64-bit PowerPC binaries, or “cisco 12000” for NEC V800 binaries)? The reason is a fascinating insight into the world of proprietary computing architectures, and the somewhat inventive way Cisco tried to lock down some of their older systems.

A brief primer on ELF

ELF, which stands for Executable and Linkable Format (originally Extensible Linking Format) and is not a Will Ferrell character, is a file format for executable files and shared libraries (among other things).

In layman’s terms, ELF specifies things like what processor the executable runs on, the ABI that it uses, the endianness and word size (32-bit or 64-bit, for example) that it uses, and so on.

One of the fields in an ELF file is the e_machine field, which specifies the type of machine the file is designed to run on. 0x02 is SPARC, 0x03 is the Intel x86, 0x14 is 32-bit PowerPC, 0x15 is 64-bit PowerPC, and so on.

This is the identifier that allows your OS to tell you “Exec format error” (or similar) when you run an executable for a CPU other than the one you are currently using. As a side note, it is also this field that allows qemu-user binfmt to work, if you are curious.
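
If you’re curious what reading that field looks like, here is a minimal sketch using the Elf32_Ehdr definition from <elf.h> (error handling trimmed for brevity):

```c
#include <elf.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    /* e_machine sits at offset 18 in both the 32-bit and 64-bit
       headers, so Elf32_Ehdr is enough to read it. */
    Elf32_Ehdr ehdr;
    if (fread(&ehdr, sizeof ehdr, 1, f) != 1) {
        fprintf(stderr, "short read\n");
        fclose(f);
        return 1;
    }
    fclose(f);

    /* Caveat: the field is stored in the file's own byte order
       (see e_ident[EI_DATA]); a big-endian PowerPC binary read on
       a little-endian host needs a byte swap before comparing. */
    printf("e_machine = %u\n", (unsigned)ehdr.e_machine); /* 20 == EM_PPC */
    return 0;
}
```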

Cisco’s use of e_machine

The boot loader on Cisco IOS devices, known as ROMMON, will refuse to load firmware built for a different router model than the one it is running on. For example, on a Cisco 2911, you may see:

loadprog: error - Invalid image for platform
e_machine = 30, cpu_type = 194

ROMMON uses e_machine as a sort of “model number”. The Cisco 4500 uses cpu_type 20, or 0x14, which happens to also be the ELF e_machine value for 32-bit PowerPC.
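
In effect – and this is my own reconstruction, since Cisco’s ROMMON is proprietary and its source is unpublished – the check boils down to something like:

```c
/* Reconstruction only; not Cisco's actual code. ROMMON compares the
   image's ELF e_machine against the platform's expected cpu_type and
   refuses to load on a mismatch. */
#include <stdio.h>

static int image_valid_for_platform(unsigned e_machine, unsigned cpu_type)
{
    if (e_machine != cpu_type) {
        printf("loadprog: error - Invalid image for platform\n"
               "e_machine = %u, cpu_type = %u\n", e_machine, cpu_type);
        return 0;
    }
    return 1;
}

int main(void)
{
    image_valid_for_platform(30, 194); /* the 2911 mismatch above */
    image_valid_for_platform(20, 20);  /* a 4500 image: 20 == EM_PPC */
    return 0;
}
```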

The “magic” database that the file command uses to describe ELF binaries only knows about a few models of Cisco hardware. I haven’t been able to determine the criteria for inclusion, or why some models are present and others aren’t.

References

The ROMMON error was gleaned from an OpenWrt forum post; I don’t have hardware to show this error myself.

More information about how older Cisco devices use ELF can be found on the LinuxMIPS wiki.

This question was originally asked by some curious people on the #talos-workstation IRC channel on Libera.Chat. I knew the basics of Cisco’s ELF-scapades, but they were the ones who inspired me to make this write-up and learn a bit more.

The musl preprocessor debate

Today, I would like to discuss a project that I care very deeply about: the musl libc. One of the most controversial and long-standing debates in the musl community is the fact that musl does not define a preprocessor macro identifying itself.

What’s in a macro?

Simply put, preprocessor macros allow C code to build parts of itself conditionally. For example, the GNU libc defines the “__GLIBC__” macro. If your code needs to do something specific to function properly on systems using that library, it can conditionally build that code using “#ifdef __GLIBC__”.
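
As a minimal sketch of what that looks like in practice (__GLIBC__ and __GLIBC_MINOR__ are real macros from glibc’s <features.h>, pulled in by any standard header; musl intentionally defines no counterpart):

```c
#include <stdio.h>

int main(void)
{
#ifdef __GLIBC__
    /* Only compiled when building against the GNU libc. */
    printf("glibc %d.%d\n", __GLIBC__, __GLIBC_MINOR__);
#else
    /* musl lands here -- along with every other non-glibc libc,
       which is exactly why code cannot single it out. */
    printf("some other libc\n");
#endif
    return 0;
}
```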

The authors of musl have said that they will not add a preprocessor macro identifying the platform as musl because:

It’s a bug to assume a certain implementation has particular properties rather than testing.

Rich Felker, “Re: #define __MUSL__ in features.h”, 2013-03-29

I agree with this sentiment in theory, and in an idealised world this would hold up. However, I’d like to discuss why I think this may need to be reconsidered moving forward.

Sometimes you can’t test

One major reason this is an issue is that sometimes it is not possible to do what the authors consider the “correct” form of testing, which is compile-testing.

This practice requires you to build a small test program, check whether it compiled successfully (and sometimes run it to determine its runtime characteristics), and then use the result of that test to influence how your actual software is built. It is the standard alternative to conditional code using preprocessor macros.
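
To make that concrete, a configure-time probe is usually just a throwaway program like this sketch (the HAVE_PIPE2 name and the choice of pipe2(2) are my own illustration); the build system compiles it, optionally runs it, and records the result:

```c
/* conftest.c: if this compiles, links, and exits 0, the build
   system records something like HAVE_PIPE2=1 and the real code
   is built with the corresponding #ifdef enabled. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    return pipe2(fds, O_CLOEXEC) == 0 ? 0 : 1;
}
```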

However, there are many reasons you may not be able to successfully perform such testing. Cross compilation is a large gap here. In fact, many years ago when I was starting the Adélie project, this caused failures in the base image I was building.

The Bash shell could not perform any compile-time or run-time checks because it was being cross-compiled from a GNU libc system to a musl libc system. This caused it to use “fallback” code that worked improperly. If musl had defined a __MUSL__ macro, Bash would not have needed to assume it was running on a pre-POSIX system.
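
Hypothetically – and I stress that __MUSL__ does not exist, which is the entire debate – the broken fallback could have been avoided with a gate as simple as:

```c
/* Hypothetical sketch: musl defines no __MUSL__ macro. If it did,
   a cross-compiled package could trust the libc's POSIX behaviour
   instead of assuming worst-case, pre-POSIX fallbacks. */
#include <stdio.h>

#if defined(__GLIBC__) || defined(__MUSL__)
# define ASSUME_POSIX 1  /* skip the run-test that cross builds cannot run */
#else
# define ASSUME_POSIX 0  /* conservative fallback */
#endif

int main(void)
{
    printf("assume POSIX behaviour: %d\n", ASSUME_POSIX);
    return 0;
}
```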

Similarly, the mailing list thread that made me feel strongly enough to write this article involves a header-only library. These types of libraries are meant to be “drop-in” and function without any changes to a developer’s build system. If header-only libraries start requiring you to use build-time tests, you lose the main reason to use them in the first place.

The author of this thread correctly points out that FreeBSD versions its API with a preprocessor macro. Any software that requires a certain API can simply ensure that __FreeBSD_version is defined as greater than or equal to the version that introduced that API.
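
That pattern looks like the following sketch; __FreeBSD_version itself is real (it comes in via <sys/param.h>), though the cutoff value and the macro it sets here are illustrative:

```c
#include <stdio.h>
#include <sys/param.h>  /* defines __FreeBSD_version on FreeBSD */

/* Gate on the release that introduced the API you need; 1300000 is
   roughly where the 13.x line begins and is used here only as an
   example cutoff. */
#if defined(__FreeBSD__) && __FreeBSD_version >= 1300000
# define HAVE_SHINY_NEW_API 1
#else
# define HAVE_SHINY_NEW_API 0
#endif

int main(void)
{
    printf("shiny new API available: %d\n", HAVE_SHINY_NEW_API);
    return 0;
}
```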

The main reason that the musl project is fearful of this approach, at least to my observation, is that features or APIs (or indeed, bug fixes) can be backported to prior versions. I feel very strongly that this is not the responsibility of the libc.

If a distribution backports a feature, API, or patch to an older version of a library, it is that distribution’s responsibility to ensure that the software they build against it continues to function. When I backported an API from Qt 5.10 to 5.9 to ensure KDE continued building for Adélie, it was my responsibility as maintainer of those packages to keep them building properly. It certainly does not mean Qt should stop defining a preprocessor macro to determine the version being built against.

Additionally, some APIs are privileged. Determining whether these APIs work correctly using run-time testing can prevent CI/CD from working properly because the CI user does not have permission to use them.

A versioned macro like FreeBSD’s makes sense

I feel that the best way forward for musl is to define a macro like FreeBSD’s: one that monotonically increases as APIs or features are added.

I agree that simple bug fixes, and even behavioural changes, probably should not be tracked with this macro. However, this would make it significantly easier to use new APIs as they are introduced.

It also makes builds more efficient. The cost of compile-time tests racks up quickly. On my POWER9 Talos workstation, typical ./configure runs take longer than the builds themselves. This is because fork+exec is still a slow path on POWER. It is similar on ARM, MIPS, and many other RISC architectures.

Macros like these don’t fully eliminate the need for ./configure, but they lessen the workload. Compile-time tests make sense for behaviour detection, but they do not make sense for API detection.

Daily-driving a Mac, one year later

It has been about a year since I published Really leaving the Linux desktop behind. This marks the first year I’ve used Mac OS as my primary computing environment since 2014. Now, I want to summarise my thoughts and feelings on using the Apple ecosystem as my primary platform – good and bad.

The Amazing

Universal Clipboard

Universal Clipboard has dramatically simplified my blog workflow. Typically, all of my articles are drafted and composed on my iPad Pro, as WordPress offers a great native app. This allows me to avoid using a browser. As I write this article, I am copying the links out of Safari on my Mac. They immediately show up in the pasteboard of the iPad.

KDE Connect does offer a Clipboard plugin, but it only supports plain text at this time. Apple’s implementation allows you to copy rich text, photos, files, and more.

Something else I would really like to note is that these features will work on High Sierra and later. It is transparent to the user no matter what version of the OS they are running. This somewhat alleviates the issue of newer OS versions having newer device requirements.

Safari Tab Groups

Safari 15 introduced the concept of Tab Groups, which is something I have been missing a lot since Firefox killed off extensions and replaced them with a severely limited alternative. Tab Groups simply allow you to categorise groups of tabs into cohesive sets. You can almost consider it a “focus window” where a logical set of tabs live.

The best part is that Safari’s Tab Groups sync via iCloud, which means I can use, add to, and manipulate tab groups on my tablet and phone as well. The replacement extension I used for Firefox had most of the features I enjoyed, but it didn’t sync between devices using Firefox Sync, which meant I could really only browse the Web in this powerful way on my main desktop (then, my Talos II).

Having tab groups that sync between devices has allowed me to bring order to my previously chaotic Web browsing habits, allowing me to focus better and waste less time being distracted.

AirPlay

I love having the ability to AirPlay my screen directly to a TV. As far as I am aware, it is not possible on Linux to share your entire display to a smart TV without complicated command-line invocations that change regularly.

I used this to show my grandmother family photos while she was recovering from a health challenge. We use this monthly for budget planning in our household – just share Excel to the TV and we can see and discuss where the money is going this month.

There is no reason that this couldn’t be implemented on Linux, but off the top of my head I can think of a few challenges: the TV may have a different DPI than the computer screen (which has always been challenging for Linux windowing systems and toolkits), compressing the video using a libre codec while providing good picture quality and low bandwidth usage, and the general sorry state of wireless networking in Linux (which is due to the chipmakers, I know).

Apple Maps

Having a Maps app on my computer that I can use to view place details, satellite imagery, landmark information, and plan routes is a very powerful tool. I use this regularly to find places to shop local, and to plan weekend excursions to parks and attractions.

The closest thing I found on Linux was Marble. While I did enjoy the fact that Marble integrated so well with OSM, the views were always slightly grainy and off. Zoom and pan needed work and I could never understand the code well enough to contribute a fix.

You can still boot Linux on them

The Asahi Linux project has done an amazing job of building a boot loader for the M1 that should allow a whole host of alternative systems to work – not just Linux, but also the BSDs, and perhaps even illumos when it gains ARM64 support.

I remember when the M1 came out, everyone thought the firmware would be locked down and prevent non-Mac OS systems from running at all. It turns out that not only did this not happen, but you can actually sign your own kernels and have Trusted Boot using your own compiled Linux. This may end up making the M1 more libre-friendly than x86 systems.

Misc

  • Apps like Things really demonstrate the power of the Mac platform and what is on offer. You could probably make something as nice and integrated as Things on Linux, but for someone as busy as me, it is nice to use what is already there.
  • I feel much more in control of notifications on the Mac platform than I did on Linux with libnotify and Plasma. Notifications can be handled per-app, not just per-notification in the app itself. “Focus modes” (DnD) sync with my other devices like my phone and tablet. I can set repeating schedules (or one-offs) with profiles that allow some apps through but not others.
  • Older devices really are still supported. Even if a device can’t officially boot Big Sur or Monterey (and the list of devices that can is big, if you are willing to play with a patching system), most of the niceties I’ve written about work back to High Sierra.

The drawbacks

The only real drawback that I’ve found in this year is that since the Mac isn’t a fully libre open-source system, I can’t fix the few bugs that I’ve run into.

I have not felt “trapped” or “helpless” or at all like I am living in a walled garden. Terminal is still there, unsigned apps can still be run with a simple context-click, and AppleScript (and now Shortcuts) is available to automate workflows.

I still believe that libre software ideals are correct and the goal of having a libre operating environment is a good one. However, I also believe that it was perhaps naive of me to believe that such a thing can truly exist in the way I hoped it could. The people who develop libre operating environments have different priorities.

And when you are spending your days using technology instead of making technology, the libre software ideals genuinely do become more of a theoretical than something in your face. This can be good, or bad, depending on your viewpoint.

At the end of the day, my goal in life is to make a difference, and also have a bit of fun. I want a system that is out of my way and lets me focus on that. For me, in 2022, that system is a Mac.

A final word on cost

Far too many people are priced out of the Apple ecosystem. I understand that part of the high cost of Apple products is to subsidise the R&D that makes all these things work so well. However, Apple’s profit margins also run well beyond their R&D expenditures.

I wish that Apple would lower their price, even a little, so that this amazing technology that works so well could be in the hands of more people.

Everyone on Earth deserves technology that is easy to use and lets them have a fun, happy life. That was my goal when I started the Adélie Linux project, and I only wish that more open source projects would do the same. Until then, I will continue doing my part to make the world a little bit better from the keyboard of a Mac.