Notes about the iBook G3 Clamshell

I’ve just repaired the hinge on my Indigo Clamshell. While I was in there, I also replaced the aging hard disk with an SD card adaptor. I wanted to write down a few notes about the process, both for posterity and so that others can benefit from my experience.

The standoffs for the hard disk caddy are brittle. I slightly over-tightened one and it snapped right off. Luckily, it snapped in such a way that it still stands solidly and holds the grounding wire of the charging board. When the Service Source manual says do not overtighten, it means it – as soon as you feel the slightest resistance, stop: it’s tight.

I burned a copy of the iBook Software Restore CD from the fabulous archivists at the Garden, so that I could put the original software back on the SD card since it was empty. I used Verbatim CD-R 52x media and burned with an LG SP80NB80 on my Mac Studio.

The disc was readable by the iBook’s optical drive, but only barely; it took five minutes to show the Desktop. I’m not sure if it was the speed at which it was burned, the Verbatim media simply not agreeing with the iBook, or something about the power of the laser in the LG.

Attempting to use Erase regularly failed with “Some applications could not be quit.”, and attempting to use Restore failed with “Restoring the software configuration iBook HD.img to volume Macintosh HD failed.”

I used my Power Mac G5 to read the CD and copy it to a USB key. Specifically, I used:

sudo dd if=/dev/disk3s1 of=/dev/disk2 bs=1048576
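
If you try this yourself, note that the device numbers were specific to my G5. Roughly, the sequence I would follow to double-check them first (the identifiers below are examples, not gospel):

# Find which device node is the CD and which is the USB key
diskutil list
# The target disk must be unmounted before dd can write to it raw
diskutil unmountDisk /dev/disk2
# Then copy the CD's partition onto the key, 1 MiB at a time
sudo dd if=/dev/disk3s1 of=/dev/disk2 bs=1048576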

A mere 15 minutes later, I had a functional USB version of the iBook Software Restore. I then installed a copy of Puma (Mac OS X 10.1.4) onto the same partition, allowing me to dual-boot the system between Mac OS 9 and Mac OS X. I have a second partition that I plan to use for Jaguar or Panther; I haven’t decided which yet.

I’ll close with a photo of the iBook being a happy Puma. Until next time, be well!

My Indigo iBook G3 Clamshell, showing the introduction video from Mac OS X “Puma” 10.1.
Happy as a clam(shell)! 😁

systemd through the eyes of a musl distribution maintainer

Welcome back to FOSS Fridays! This week, I’m covering a real pickle.

I’m acutely aware of the flames this blog post will inspire, but I feel it is important to write nevertheless. I volunteer my time towards helping to maintain a Linux distribution based on the musl libc, and I am writing an article about systemd. This is my take and my take alone. It is not the opinion of the project – or, as far as I am aware, any of the other volunteers working on it.

systemd, as a service manager, is not actually a bad piece of software by itself. The fact it can act as both a service manager and an inetd(8) replacement is really cool. The unit file format is very nice and expressive. Defining mechanism and leaving policy to the administrator is a good design.

Of course, nothing exists in a vacuum. I don’t like the encouragement to link daemons to libsystemd for better integration – all of the useful integrations can be done with more portable measures. And I really don’t like the fact they consider glibc to be “the Linux API” when musl, Bionic, and other libcs exist.

I’d like to dive into detail on the good and the bad of systemd, as seen through my eyes as an end user, an administrator, and a developer.

Service management: Good

Unit files are easy to write by hand, and also easy to generate in an automated fashion. You can write a basic service in a few lines, and grow into using the other features as needs arise – or you can write a very detailed file, dozens of lines long, making it exact and precise.
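
As a rough illustration – the service name and binary path here are invented, not from any real package – a minimal unit can be as simple as:

[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/bin/exampled --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target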

Parallel service starting and socket activation are first-class citizens as well, which is very important for making boot-up faster and more reliable.
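
For instance – again with invented names – socket activation only needs a matching .socket unit alongside the service; systemd listens on the port and starts exampled.service on the first connection:

# exampled.socket – paired with exampled.service by name
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target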

The best part is the concept that this configuration exactly describes how the system should appear and behave while it is running. This is similar to how network device standards work – see NETCONF and its stepchild RESTCONF. You define how you want the device to look when it is running, apply the configuration, and eventually the device becomes consistent with that configuration.

This is a far cry from OpenRC or SysV init scripts, which focus almost exclusively on spawning processes. It’s a powerful paradigm shift, and one I wholeheartedly welcome and endorse.

Additionally, the use of cgroups per managed unit means that process tracking is always available, without messy pid files or requiring daemons to never fork. This is another very useful feature that not only helps with overall system control, but also helps debugging and even security auditing. When cgroups are used in this way, you always know which unit spawned any process on a fully-managed system.
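
As a quick illustration of what that buys you (the PID is of course an example):

# Show the full cgroup tree, every process grouped under its owning unit
systemd-cgls

# Or ask directly which unit a given process belongs to
systemctl status 1234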

Lack of competition: Not good

There is no reason that another service manager couldn’t exist with all of these features. In fact, I hope that competition to systemd will emerge and be taken seriously by the community. Having a single package be all things for all use cases leads to significant problems. Changes in systemd necessarily affect every single user – this may seem obvious, but it means the project is harder to evolve. Evolution of the system may break – and in some cases already has broken – a wide range of use cases and machines.

Additionally, without competition there is no external pressure nudging it towards ideas and concepts that perhaps the maintainers aren’t sure about. GCC and Clang learn from each other’s successes and failures and use that knowledge to make each other better. There is no package doing that with systemd right now. Innovation is stifled where choice is removed.

Misnaming glibc as “the Linux API”: Bad

I am also unhappy about systemd’s lack of musl libc support. That is probably a blessing for me, because it’s an easy reason to avoid trying to ship it in Adélie. While I have just spent five paragraphs noting how great systemd is at service management, it is really bad at a lot of other things. This is where most articles go off the deep end, but I want to provide some constructive criticism on issues I’ve personally faced while using systemd-based machines.

The Journal: Very bad

journald is my least-favourite feature of systemd, bar none. While I understand the reasons why it was designed the way it was, I do not appreciate that it is the only way to log on a systemd system. Sure, you can ForwardToSyslog and set the journal to be in-memory-only with a small size, and pretend journald doesn’t exist. However, that is not only wasted processor power and memory, it’s also an additional attack surface. It would be great if there were a “stub” journald that was strictly a forwarder with no other code.
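
For reference, this is roughly the configuration I mean – these are real journald.conf options, though treat the size as an example rather than a recommendation:

# /etc/systemd/journald.conf
[Journal]
# Keep the journal in memory only, capped at a small size,
# and hand every message off to a real syslog daemon.
Storage=volatile
RuntimeMaxUse=16M
ForwardToSyslog=yes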

I am also unhappy with how the journal tries to “eat” core files. While the Linux default of putting a file named ‘core’ in $CWD is absolutely unusable for development and production, the weird mixture of filesystem and binary journal makes things needlessly complex. The documentation even explicitly calls out that core files may exist without corresponding journal entries, and that journal entries may point to core files that no longer exist. Yet they use xattrs to put “some metadata” in the core files. Why not just have a sidecar file (maybe [core file name].info or .json or .whatever) that contains all the information from the journal, and a single journal entry that points to that file if the administrator wants more information about the crash?

resolved: A solution looking for a problem

resolved might be a decent idea on its own, but there are already other packages that can provide a local caching resolver without the many problems of resolved. Moreover, the very idea of a DNS resolver being part of “the system layer” seems ill-advised to me.

DNSSEC support is experimental and not handled correctly, and they readily admit that. It’s fine to know your limitations, but DNSSEC is something that is incredibly valuable to have on endpoints. I don’t really think resolved can be taken seriously without it. It’s beyond me how no one has contributed this feature to such a widely-used package.

There are odd issues with local domain search, made more confusing on home networks, where a lot of what resolved does is overkill anyway. On enterprise networks it’s likely a bad fit too, which makes me question why it supports everything it does.

Lastly, and relatedly, in my opinion resolved tries to shoehorn in too many odd features and protocols without getting the basics right first. mDNS is better taken care of by a dedicated package like Avahi. LLMNR has been deprecated by its creator, Microsoft, in favour of mDNS for over a year. As LLMNR has always been a security risk, I’m not sure why support was added in the first place.

nspawn: Niche tool for niche uses

Any discussion of resolved would be remiss without mentioning the main reason it exists: nspawn. It’s an interesting take on sitting “in between” chroot and a full container runtime like Docker. It has niche uses, and I don’t have any real qualms with it, but I’ve never found it useful in my own work, so I don’t have a lot of experience with it. Usually when I reach for chroot I want shared state between host and container, so nspawn wouldn’t make sense there. And when I reach for Podman, I want full isolation, which I feel more comfortable handing to a package with more tooling around it.

Ancillary tools: Why in the system layer?

networkd is immature, doesn’t have a lot of support for advanced use cases, and has no GUI for end users. I don’t know why they want to stuff networking into the “system layer” when NetworkManager exists and keeps all the networking goop out of the system layer.

timedated seems like a cute way to let users change timezones via a PolicyKit action, but otherwise it seems like something better taken care of by a “real” NTP client like Chrony or ntpd. And again, I don’t know why it should live in the system layer.

systemd-boot only supports EFI, which makes it non-portable and inflexible. You won’t find EFI on Power or Z, and I have plenty of ARM boards that don’t support mainline U-Boot either. This really isn’t a problem with systemd-boot itself, as it’s totally understandable to only want to deal with a single platform’s idiosyncrasies. What is concerning is that distros like Fedora are pivoting away from GRUB in favour of it, which means they are losing even more portability.

In conclusion: A summary

What I really want to make clear with this article is:

  • I don’t blindly hate systemd, and in fact I really admire many of its qualities as an actual service manager. What I dislike is its attempt to take over what its developers term the “system layer”, when there are no alternatives available.
  • The problems I have with systemd are tangible and not just hand-wavy “Unix good, sysd bad”.
  • If there were an effort to separate systemd from all of the other tentacles it has grown, I would genuinely push to have it available as a service manager in Adélie. I feel that as a service manager – and only as a service manager – it would provide a fantastic user experience that cannot be rivalled by other existing solutions.

Thank you for reading. Have a great day, and please remember that behind every keyboard is a real person with real feelings.

The Sinking of the Itanic

The Linux kernel has officially removed the Intel Itanium CPU architecture as of version 6.7 (currently unreleased). The maintainers waited until the 6.6 Long Term series was released, so that users of Itanium systems who still desire support have one final LTS kernel.

Most people don’t care a whole lot about this. A very few are happy about it, as now there is “one less old dead platform” in the Linux kernel. Some, however, are concerned both for those with remaining Itanium hardware and about whether this signals impending doom for those of us who care about other architectures.

I’d like to explore a bit about the Itanium processor, my personal feelings on this news, and my belief that this removal is not a harbinger of doom for any other architectures.

The Itanium wasn’t a typical CPU

First, let’s start with a primer on the Itanium itself. Most CPUs fall into one of two categories: RISC or CISC – “Reduced” and “Complex” instruction set computers, respectively. They are named for the size and complexity of the instruction set the processor understands at the lowest level.

A RISC CPU has basic operations like add, subtract, jump, and conditional branch. A CISC CPU has richer operations that it can perform in a single instruction, such as square root, binary-coded decimal arithmetic, and others. This comes at the cost of extra power consumption and a much more complicated chip design.

Typical RISC systems that you may recognise include Arm, PowerPC, SPARC, and MIPS. CISC systems include the Intel x86, the Motorola 68k, and mainframes like System z.

Itanium is neither CISC nor RISC. It is what is termed a “VLIW”, or Very Long Instruction Word, CPU. VLIW systems make the programmer responsible for things like parallelisation, instruction scheduling and retiring, and more. If these terms aren’t familiar to you, then you may already see why VLIW systems aren’t popular. The expectation is that the compiler – or, at lower levels like boot loaders and the compilers themselves, the human programmer – will perform the work that most modern processors do in hardware.

It was also termed an “EPIC”, or Explicitly Parallel Instruction Computing, design, because each “slot” of the processor can be programmed explicitly, in parallel, within a single assembly language stanza.
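
To make that concrete, here is a rough IA-64-style stanza, written from memory and untested, so treat it as illustrative only. The double semicolon is a “stop” that the programmer (or compiler) must insert to mark where one parallel group ends:

add  r8 = r32, r33    // these three instructions form one group,
sub  r9 = r34, r35    // so the hardware may issue them in parallel
ld8  r10 = [r36]
;;                    // explicit stop: the group above must complete
add  r8 = r8, r10     // before this instruction may consume r10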

The only other “popular” VLIW systems are some graphics cards (which is why, for a time, they were the best at mining cryptocurrency) and Russia’s home-grown Elbrus architecture.

Compilers are still evolving in 2023 to handle the sorts of problems the Itanium brings to the forefront, with the goal of making code faster. The theory is that if compilers can output a more ideal ordering of instructions, code will execute faster on any architecture. However, the Itanium launched in 2001, before most compiler designers had even considered doing this sort of work.

Hardware dearth leads to port death

There are many CPU architectures in the world. I don’t personally believe Itanium is a signal that various other CPU architectures might be next for Linux’s chopping block. There are many reasons for this belief, but the most important one is that Itanium hardware has always been scarce.

At the start of the Itanium’s life, circa 2001, there were a few different vendors who shipped hardware with it. These were HP, SGI (which spun MIPS off into its own company to focus on Itanium systems), and Dell. IBM did create a single Itanium-based system, but it was short-lived. Across its life, various other manufacturers created a few systems. The main driver of Itanium was HP, which had a hand in creating the architecture and paid Intel a significant amount of money to keep producing it towards the end of its life.

Various statistics show the surprisingly low uptake of the Itanium. Perhaps the most shocking is Gartner’s 2007 assessment: 8.4 million x86 systems purchased that year, 417,000 RISC systems (virtually all of them PowerPC and SPARC), and just 55,000 Itanium systems – 90% of those from HP. HP’s offerings were very expensive, required long-term contracts, and were aimed firmly at large enterprises.

Now let us compare this with the architectures that I’ve seen the most worry for: SPARC and Alpha.

Sun sold over 500,000 SPARC systems in 1999-2000 alone, which may be more than all Itaniums that exist in the entire world right now.

It’s really hard to extrapolate sales figures for Alpha, but Compaq’s Q4 1999 Alpha sales for Western Europe alone were $245 million. The highest-priced AlphaServer I could find in Compaq’s 1999 catalogue was the ES40 6/667, at about $48,000; we’ll go ahead and double that to $96,000 to account for potential support contracts and hardware upgrades. That works out to roughly 2,500 units ($245 million ÷ $96,000 ≈ 2,550) shipped in a single quarter, to Western Europe only. Since many businesses surely bought the lower-end models, the real unit count is far higher. Realistically, I would assume Alpha sold about 100,000 units in 1999. Recall that the entire Itanium market in 2007 was about 55,000 units.

Beyond that, let’s take a look at the used market. Linux contributors rarely work on these architecture ports using hardware they bought new 20+ years ago – they work on second-hand hardware that they enjoy using, and contribute with it.

Itanium systems currently run somewhere between 600 and 2,000 USD on eBay, with a few below 600. Most of the ones below 600 are either not working, or individual blades that must be installed into an HP blade enclosure. That enclosure is a separate purchase, very large, and probably only usable in a real datacentre. There are also a few newer models above 2,000 USD. There are 277 systems listed in the “Servers” (not “Parts”) category, and the “largest” system I could find had 4 GB of RAM.

There are “more than 1,300” SPARC systems on offer on eBay, with the typical range being 100 to 300 USD. There are more costly examples, and Blade 2000/2500s (desktops with GPUs) are around 1000 USD.

There are 436 AlphaServers, and the average seems to run 400 to 1200 USD. Some of these systems have 8 GB RAM or more, and more of them seem to include seller-offered warranties than Itanium. And let us remember that Alpha was discontinued around the same time Itanium was newly introduced.

Genuine maintenance concerns

There are more than a few concerns about Itanium from a Linux kernel maintenance point-of-view. One of the most prominent is the EFI firmware. It is based on the older EFI 1.10 standard, which pre-dates UEFI 2.0 by some years and does not include a lot of the interfaces that UEFI does. By itself this isn’t a large concern, but to ensure the code is functional, it needs to be built and tested. There were simply not enough users to do this at a large enough scale. Developers wanted to work on EFI code, and did not have the ability to test on Itanium.

The architecture is different enough from any of the others that it requires special consideration for drivers, the memory manager, I/O handling, and other components. Typically, for architectures such as the Itanium, you really want one or more people who know a lot about the internals present and ready to test patches, answer questions, and participate in kernel discussions. This simply wasn’t happening any more. Intel washed their hands of the Itanium long ago, and HPE has focused on HP-UX and even explicitly marked Linux as deprecated on this hardware back in 2020.

The 68k has the Amiga, Atari, and Mac communities behind it. The PowerPC is still maintained largely by IBM, even the older chips and systems. Fujitsu occasionally chimes in directly on SPARC, and there is an active community of users and developers keeping that port alive. There are a number of passionate people, whether hobbyists or community-supported, doing this necessary work for a number of other architectures.

Unfortunately, the Itanium just doesn’t have that kind of organisation behind it – and I still largely suspect that is due to a lack of hardware. There does already seem to be a small number of enthusiasts trying to save it, and I wish them the very best of luck. The Itanium is very interesting as a research architecture, and it can answer a lot of the questions I feel ISA and chip designers will have in the coming decades about different ways of thinking – about what works and what doesn’t.

In conclusion

The Itanium was an odd fellow of a CPU architecture. It wasn’t widely adopted while it was around, and it was discontinued by its final manufacturer in 2021. Used examples to purchase are uncommon and more expensive than hardware for other, better-supported architectures – and that hardware is exactly what would be required to keep maintaining software for it.

While it is always disappointing when Linux drops support for an architecture, I don’t think the Itanium is some sort of siren call that implies more popular architectures will be removed. And I will note that virtually every architecture is more popular than the Itanium.

Looking forward to 2023

(Note: This draft was being written when that Monday Night Football incident happened, so it was shelved for a bit.)

As this is my last day of holiday break, I thought I’d reflect a bit on what makes me the most excited for the coming year. Obviously, none of us know what the future holds, but these are some of my hopes for 2023:

Social stuff

It looks like Twitter might survive after all, but the fragmentation – and the millions of people moving to the Fediverse – intrigues me. I am very curious to see where the Fediverse goes now that it has so much more interest. I am hoping to see journalists and meteorologists, some of my favourite follows on Twitter, start using it in earnest. It would be great to see the platform grow to encompass new interests, since the majority of people there currently lean towards tech.

While their privacy policies and business practices still disturb me, this year will likely be the year I rekindle my Facebook account. There are still family members and friends of mine who use it, and some pretty nifty retrocomputing groups are on it as well. If you can’t beat ‘em, join ‘em, I guess. Any content I post on Facebook would be mirrored to better platforms, so it wouldn’t be anything special for those of you who want to continue to stay away. I just don’t want to miss out on connections I could have because of my aversion to late-stage advertisement capitalism.

Apple ecosystem

My iPad Pro is going to see more usage this year now that Stage Manager is finally available, bringing multiple app/window support. This is something I’ve personally felt has kept the iPad from living up to its full potential, and something I remember seeing in the jailbreak scene for years, so I’m quite happy to see Apple finally put it in official system software.

While I know rumours abound and there is no reason to think it would be released this year, I’m eternally looking forward to a wearable – like, say, an Apple Watch – that can also function as a glucometer. As someone with type 1 diabetes, it’d be a real boon to get even a rough estimate of my blood glucose level without having to wear a separate sensor.

It would be very cool, though unlikely, to see a MacBook Pro with a Dynamic Island like the iPhone 14 Pro.

Retrocomputing

I’ve received a lot of goodies, hardware and software, over autumn and winter. I can’t wait to put them to good use in the Retro Lab. I’m hoping to write a number of new articles in my Retro Lab series.

There are a number of software development projects I’d like to tinker with in retrocomputing circles. I’m keeping the details vague for now, as I don’t want to make any promises, but my focus as always will be on making classic Macs and Windows NT useful in the modern era.

Linux and libre software

I’ve been following the SPDX project’s continual drive to make automated tooling around discovering and managing licenses of software packages. It would be very cool to integrate some of these tools into package managers like APK.

The Qt project is still not in my good graces after their decision to make LTS releases commercial-only. That feeling only grew stronger when it was announced that qmlsc, the QML compiler that turns QML apps into high-performance, non-interpreted C++ code, is also only available to commercial customers of Qt. Maybe the KDE team will support a libre Qt 6 LTS branch in the same way they support 5.15?

Speaking of LTS branches of things with a major version of 6, the Linux kernel’s 2023 LTS edition should be pretty exciting. Linux 6.1 and 6.2 bring a lot more support for AArch64 boards, including the Apple M1 and Qualcomm 8cx Gen 3. When that LTS drops, it will be very exciting to dual-boot mainline Linux on my MacBook Pro M1.

I am personally hoping to have some time to devote to “traditionally opposite” endian projects. Specifically, I want to see if I can bootstrap an aarch64_be environment on my Pine A64, and similarly bootstrap a ppc64el environment. There are probably going to be a lot of false assumptions in code regarding aarch64_be.

Adélie continues to improve regularly, and hopefully this will finally be the year of the release of Adélie Linux 1.0. Yes, I am taking on a somewhat more active role again, and no, I do not want to comment 😉

Lastly, it will be exciting to see where the GCC Rust front end goes. Hopefully this will lead to significant improvements in Rust’s bootstrap story, which will make it more useful and approachable for people who cannot use, or do not want to trust, the Mozilla-provided binaries.

Personal

I want to take photography seriously again. Photography can tell a story, document history, and transport others to a new perspective. I really enjoy taking these kinds of photos and hope to have some great snapshots to share throughout the year.

In addition to the retrocomputing projects, there are a few other non-retro software development and library improvement projects I hope to spend some time on this year, among them Wayland on Power, Zig on big-endian Power, and better compression support in APK Tools.

In conclusion

That is an overview of what I hope to devote my time to in 2023. What do you think? Are there cool developments that I should be looking at that I missed? Are you excited about some of these too? Feel free to discuss in the comments!