What’s the deal with Cisco devices in `file` output, anyway?

If you work on PowerPC systems of some kind – or maybe you work on automotive MCUs that use the NEC V800 CPU – you may have run across some strange output when you run the `file` command on a binary:

```
/usr/bin/file: ELF 32-bit MSB pie executable, PowerPC or cisco 4500, version 1 (SYSV), dynamically linked, interpreter /lib/ld-musl-powerpc.so.1, stripped
```

Of course it’s a PowerPC binary, but why the mention of “cisco 4500” (or Cisco 7500s for 64-bit PowerPC binaries, or Cisco 12000s for NEC V800s)? The answer is a fascinating glimpse into the world of proprietary computing architectures and the somewhat inventive way Cisco tried to lock down some of their older systems.

A brief primer on ELF

ELF, which stands for Extensible Linking Format or Executable and Linkable Format and is not a Will Ferrell character, is a file format for executable files and shared libraries (among others).

In layman’s terms, ELF specifies things like what processor the executable runs on, the ABI that it uses, the endianness and word size (32-bit or 64-bit, for example) that it uses, and so on.

One of the fields in an ELF file is e_machine, which specifies the type of machine the file is designed to run on: 0x02 is SPARC, 0x03 is Intel x86, 0x14 is 32-bit PowerPC, 0x15 is 64-bit PowerPC, and so on.

This is the identifier that allows your OS to tell you “Exec format error” (or similar) when you run an executable for a CPU other than the one you are currently using. As a side note, it is also this field that allows qemu-user binfmt to work, if you are curious.
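As a concrete illustration, here is a minimal sketch of reading e_machine by hand using the definitions from <elf.h>. Error handling is abbreviated, and conveniently the field sits at the same offset in 32-bit and 64-bit ELF headers, so reading an Elf32_Ehdr suffices:

```c
#include <elf.h>
#include <stdio.h>
#include <string.h>

/* Minimal sketch: print the e_machine of an ELF file. e_machine sits at
 * the same offset in Elf32_Ehdr and Elf64_Ehdr, so the 32-bit header is
 * enough for this purpose. */
int main(int argc, char **argv)
{
    Elf32_Ehdr ehdr;
    FILE *f;

    if (argc < 2 || !(f = fopen(argv[1], "rb")))
        return 1;
    if (fread(&ehdr, sizeof ehdr, 1, f) != 1 ||
        memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0)
        return 1;
    fclose(f);

    /* The field is stored in the file's byte order, so decode it
     * according to EI_DATA instead of trusting the host's endianness. */
    unsigned char *p = (unsigned char *)&ehdr.e_machine;
    unsigned machine = (ehdr.e_ident[EI_DATA] == ELFDATA2MSB)
                     ? (unsigned)(p[0] << 8 | p[1])
                     : (unsigned)(p[1] << 8 | p[0]);

    /* 20 (0x14) is EM_PPC, 32-bit PowerPC: the very value that
     * ROMMON treats as the cpu_type of a Cisco 4500. */
    printf("e_machine = %u (0x%x)\n", machine, machine);
    return 0;
}
```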

Cisco’s use of e_machine

The boot loader on Cisco IOS devices, known as ROMMON, will refuse to load firmware intended for a different router model. For example, on a Cisco 2911, you may see:

```
loadprog: error - Invalid image for platform
e_machine = 30, cpu_type = 194
```

ROMMON uses e_machine as a sort of “model number”. The Cisco 4500 uses cpu_type 20 (0x14), which happens to also be the ELF e_machine value for 32-bit PowerPC.

The “magic” database that the `file` command uses to determine the machine type of ELF binaries only knows a few Cisco models. I haven’t been able to determine the criteria for inclusion, or why some models are present and others aren’t.

References

The ROMMON error was gleaned from an OpenWrt forum post; I don’t have hardware to show this error myself.

More information about how older Cisco devices use ELF can be found on the LinuxMIPS wiki.

This question was originally asked by some curious people on the #talos-workstation IRC channel on Libera.Chat. I knew the basics of Cisco’s ELF-scapades, but they were the ones who inspired me to make this write-up and learn a bit more.

My feelings about the Queen (are complicated)

Something that many don’t know about me is that I’m Welsh (despite living in the US). Being Welsh gives me a very interesting relationship with Britain and the monarchy.

The United Kingdom provides us with a lot of good things, but Britain has also traditionally treated us pretty poorly at various times and under various reigns.

Still, it has made me quite upset at a visceral level to see how much the Internet and Twitterverse appear to hate the Queen and her family personally. A lot of what I see revolves around either colonialism or misunderstandings perpetuated by the media rags of the day.

While I definitely agree the Queen and the royals in general should have done more to give reparations to those who suffered under British colonial rule, I don’t agree she should shoulder all or even most of the blame.

Under her father’s reign and her own, many of the former colonies became independent republics. And it’s not like the Tories in power for the majority of her reign would have approved appropriate reparations anyway. I do wish they had done more for Africa, and I hope to see the new King doing work on that.

While the royals have done some pretty terrible things in their time, they’ve also done a lot of good. They are all big proponents of helping the climate, and the younger royals especially take after Lady Di in wanting to help the impoverished.

Speaking of Diana, let’s not forget that in her capacity as a royal, she helped to destigmatise HIV/AIDS at a time when many others in high places were happy to let those suffering from the disease rot.

Could they do more? Absolutely. Are they as flawless or squeaky clean as they’d like you to believe? Not even close.

But I highly disagree with the level of vilification happening online in the wake of the Queen’s death. I mourn her and the legacy of good things that she has done, while still acknowledging she was a flawed being and there were things she should have done that she did not.

Wherefore art thou, USB-C hubs?

I’ve been looking for weeks at various stores around Tulsa, and online, for USB-C hubs. I already have a USB-C hub that has ports like Ethernet, HDMI, and USB-A. What I am looking for is a hub that has many USB-C ports.

As my Lightning cables age out, and I replace more equipment with devices that have only USB-C, more of my devices are connected this way.

My M1 MacBook Pro has two USB-C ports, but I have:

  • A USB-C SSD with my photo library.
  • My iPhone 12 with a Lightning to USB-C cable (all of my Lightning to USB-As are finally worn out).
  • My iPad mini which is USB-C to USB-C.
  • The aforementioned hub for connecting an external display.
  • Sometimes an optical drive, which yes, also uses USB-C.
  • The charging cable, because all of these devices pull a lot of power.

So, as a ballpark estimate, I need about six USB-C ports here. I really do not want to use a bunch of C-to-A adaptors, especially since some of my devices seem to slow down when using them. Has anyone seen anything like this out there? I strongly prefer to shop local, but at this point I would even consider buying from Amazon.

Or to put it in the words of one of my favourite bloggers: Dear lazyweb, where can I buy USB-C hubs?

The musl preprocessor debate

Today, I would like to discuss a project that I care very deeply about: the musl libc. One of the most controversial and long-standing debates in the musl community is the fact that musl does not define a preprocessor macro identifying itself as musl.

What’s in a macro?

Simply put, preprocessor macros allow C code to build parts of itself conditionally. For example, the GNU libc defines the “__GLIBC__” macro. If your code needs to do something specific to function properly on systems using that library, it can conditionally build that code using “#ifdef __GLIBC__”.
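As a tiny example (the macros come from glibc’s <features.h>, which <stdio.h> pulls in):

```c
#include <stdio.h>

int main(void)
{
#ifdef __GLIBC__
    /* This branch is only compiled against the GNU libc. */
    printf("glibc %d.%d\n", __GLIBC__, __GLIBC_MINOR__);
#else
    /* musl lands here: it defines no macro identifying itself. */
    printf("some other libc\n");
#endif
    return 0;
}
```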

The authors of musl have said that they will not add a preprocessor macro identifying the platform as musl because:

> It’s a bug to assume a certain implementation has particular properties rather than testing.

– Rich Felker, “Re: #define __MUSL__ in features.h”, 2013-03-29

I agree with this sentiment in theory, and in an idealised world this would hold up. However, I’d like to discuss why I think this may need to be reconsidered moving forward.

Sometimes you can’t test

One major reason this is an issue is that sometimes it is not possible to do what the authors consider the “correct” form of testing, which is compile-testing.

This practice requires you to build a small test program, check whether it built properly, determine its runtime characteristics, and then use the results of that test to influence how your actual software is built. It is the alternative to conditionally compiling code with preprocessor macros.
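As a rough sketch, a configure script might try to compile and link a probe like the one below, and enable its fallback code only if that step fails; strlcpy is merely an illustrative feature here, and HAVE_STRLCPY is whatever name the build system chooses:

```c
/* conftest.c: a typical configure-style probe. If this compiles and
 * links, the build system concludes that libc provides strlcpy() and
 * defines something like HAVE_STRLCPY; otherwise fallback code is built. */
#include <string.h>

int main(void)
{
    char buf[8];
    return (int)strlcpy(buf, "probe", sizeof buf);
}
```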

However, there are many reasons you may not be able to successfully perform such testing. Cross compilation is a large gap here. In fact, many years ago when I was starting the Adélie project, this caused failures in the base image I was building.

The Bash shell could not perform any compile-time or run-time checks because it was being cross-compiled from a GNU libc system to a musl libc system. This caused it to use “fallback” code that worked improperly. If musl had defined a __MUSL__ macro, Bash would not have needed to assume it was running on a pre-POSIX system.

Similarly, the mailing list thread that made me feel strongly enough to write this article involves a header-only library. These types of libraries are meant to be “drop-in” and function without any changes to a developer’s build system. If header-only libraries start requiring you to use build-time tests, you lose the main reason to use them in the first place.

The author of this thread correctly points out that FreeBSD versions its API with a preprocessor macro. Any software that requires a certain API can simply ensure that __FreeBSD_version is greater than or equal to the version that introduced that API.
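Such a check looks roughly like the following; __FreeBSD_version is real, while HAVE_NEW_THING and the particular threshold are names I made up for illustration:

```c
#include <sys/param.h>  /* provides __FreeBSD_version on FreeBSD */

/* 1300000 is the first value used by FreeBSD 13.0. */
#if defined(__FreeBSD__) && __FreeBSD_version >= 1300000
#  define HAVE_NEW_THING 1  /* the API we want is guaranteed present */
#else
#  define HAVE_NEW_THING 0  /* probe for it, or fall back */
#endif
```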

The main reason the musl project is wary of this approach, at least from my observation, is that features, APIs, or indeed bug fixes can be backported to prior versions. I feel very strongly that this is not the responsibility of the libc.

If a distribution backports a feature, API, or patch to an older version of a library, it is that distribution’s responsibility to ensure that the software they build against it continues to function. When I backported an API from Qt 5.10 to 5.9 to ensure KDE continued building for Adélie, it was my responsibility as maintainer of those packages to keep them building properly. It certainly does not mean Qt should stop defining a preprocessor macro to determine the version being built against.

Additionally, some APIs are privileged. Determining whether these APIs work correctly using run-time testing can prevent CI/CD from working properly because the CI user does not have permission to use them.

A versioned macro like FreeBSD’s makes sense

I feel that the best way forward for musl is to define a macro like FreeBSD’s: one that monotonically increases as APIs or features are added.

I agree that simple bug fixes, and even behavioural changes, probably should not be tracked with this macro. However, this would make it significantly easier to use new APIs as they are introduced.
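To make it concrete: musl defines nothing of the sort today, but use of a hypothetical __MUSL_VERSION__ (both the name and the value scheme are my invention) could look like this:

```c
/* HYPOTHETICAL: musl does not define __MUSL_VERSION__ or any macro
 * identifying itself. This only sketches how a FreeBSD-style,
 * monotonically increasing value could gate a newer libc API. */
#if defined(__MUSL_VERSION__) && __MUSL_VERSION__ >= 1002003  /* 1.2.3+ */
#  define HAVE_SHINY_NEW_API 1  /* call the new API directly */
#else
#  define HAVE_SHINY_NEW_API 0  /* keep the compile-time probe / fallback */
#endif
```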

It also makes builds more efficient. The cost of compile-time tests racks up quickly. On my POWER9 Talos workstation, typical ./configure runs take longer than the builds themselves. This is because fork+exec is still a slow path on POWER. It is similar on ARM, MIPS, and many other RISC architectures.

Macros like these don’t fully eliminate the need for ./configure, but they lessen the workload. Compile-time tests make sense for behaviour detection, but they do not make sense for API detection.