Dual-booting Windows 11 and Windows 7 on a Haswell

I have a Haswell-era MSI B85-G41 PC Mate motherboard and I decided to use it as a “mid-tier”-ish gaming PC and also as a TV set-top box. I already had a WinTV-DCR-2650 dual-tuner CableCARD USB device, and I was gifted an Nvidia GeForce RTX 3070 for the project. The board had 32 GB RAM when I decommissioned it in 2019 as the Adélie x86_64 builder, so memory was not a concern.

My goal is to use Windows 11 for gaming, and Windows 7’s Media Centre for TV duty (since Cox Oklahoma encrypts virtually all channels).

The problem is that Microsoft dropped support for Windows 7 long before this hardware existed, so getting it to boot on this machine is difficult. Windows 11, meanwhile, doesn’t officially support Haswell either.

Windows 11 was trivial to install in all honesty. I used Rufus to put the installer for Windows 11 on a USB disk, then followed the suggestions from this article in Tom’s Hardware and it installed quite nicely. It is performant, stable, and even still does Windows Update.

Windows 7 was significantly more difficult. I used Rufus again and ensured it used GPT and UEFI. It locked up early in boot. I found the UEFISeven project, which seemed to make things somewhat better, but it never finished booting beyond “Starting Windows”. The Windows logo continued to pulse, but after 15 minutes I gave up. I then found an issue on the UEFISeven tracker that provided a binary, and despite my trepidation about running unknown binaries for booting, putting it on the USB stick managed to boot Windows 7’s installation environment successfully.

Next, while performing the installation, the system had a STOP 0x7E in HIDCLASS.SYS. This appears to be a classic bug, caused by using a Microsoft wireless keyboard and mouse. (The irony of a Microsoft hardware product crashing Microsoft Windows…) Replacing them with (even more ironically) an Apple Pro Keyboard and Mouse allowed setup to continue.

The next problem was actually dual-booting. If I use the patched Windows 7 boot EFI application as BOOTMGFW.EFI, Windows 11 doesn’t boot; it seems to load all the files, but stays at a black screen. If I use Windows 11’s BOOTMGFW.EFI, Windows 7 no longer boots.

I’ve made a small batch script on the desktop of each one to reboot into the other. The 7->11 script renames BOOTMGFW.EFI to BOOTMGFW.7, then renames BOOTMGFW.11 to BOOTMGFW.EFI; the 11->7 script does the inverse. Note that you have to mount the ESP first, which is done (in both OSes) with “MOUNTVOL S: /S”. You can use any available drive letter.
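For reference, the 7->11 script looks roughly like this. It is only a sketch: it assumes the boot manager lives in the usual \EFI\Microsoft\Boot\ directory on the ESP, and it needs to run with administrator rights. The 11->7 script simply swaps the .7 and .11 names.

@echo off
rem Mount the EFI System Partition; any free drive letter works (S: here).
mountvol S: /S
rem Park the patched Windows 7 boot manager and put the Windows 11 one back.
ren S:\EFI\Microsoft\Boot\BOOTMGFW.EFI BOOTMGFW.7
ren S:\EFI\Microsoft\Boot\BOOTMGFW.11 BOOTMGFW.EFI
rem Unmount the ESP and reboot immediately.
mountvol S: /D
shutdown /r /t 0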

I used LegacyUpdate.net to fetch and install all the needed updates for Windows 7. I still wouldn’t trust Windows 7 unprotected on the “real” internet, but I’m comfortable enough with it sitting on my home network this way. Kudos to that team for making such a useful and valuable service for all retrocomputing enthusiasts!

Notes about the iBook G3 Clamshell

I’ve just repaired the hinge on my Indigo Clamshell. While I was in there, I also replaced the aging hard disk with an SD card adaptor. I wanted to write down a few notes about the process, both for posterity and so that others can benefit from my experience.

The standoffs for the hard disk caddy are brittle. I slightly over-tightened one and it snapped right off. Luckily, it snapped in a way that it would still stand solidly and hold the grounding wire of the charging board. When the Service Source manual says do not overtighten, it means it – as soon as there is the slightest resistance, stop: it’s tight.

I burned a copy of the iBook Software Restore CD from the fabulous archivists at the Garden, so that I could put the original software back on the SD card since it was empty. I used Verbatim CD-R 52x media and burned with an LG SP80NB80 on my Mac Studio.

The disc was readable by the iBook’s optical drive, but only barely; it took five minutes to show the Desktop. I’m not sure if it was the speed at which it was burned, the Verbatim media simply not agreeing with the iBook, or something about the power of the laser in the LG.

I regularly received “Some applications could not be quit.” when attempting to use Erase, and received “Restoring the software configuration iBook HD.img to volume Macintosh HD failed.” when attempting to use Restore.

I used my Power Mac G5 to read the CD and copy it to a USB key. Specifically, I used:

# disk3s1 = the restore CD; disk2 = the USB key (1 MiB block size)
sudo dd if=/dev/disk3s1 of=/dev/disk2 bs=1048576

A mere 15 minutes later, I had a functional USB version of the iBook Software Restore. I then installed a copy of Puma (Mac OS X 10.1.4) on the same partition, allowing me to dual-boot the system between Mac OS 9 and Mac OS X. I have a second partition I plan to use to install Jaguar or Panther; I haven’t decided which one yet.

I’ll close with a photo of the iBook being a happy Puma. Until next time, be well!

My Indigo iBook G3 Clamshell, showing the introduction video from Mac OS X “Puma” 10.1.
Happy as a clam(shell)! 😁

The Sinking of the Itanic

The Intel Itanium CPU architecture has officially been removed from Linux as of version 6.7 (currently unreleased). The Linux maintainers waited until the 6.6 Long Term Support series was released, so that those who still want to run Itanium systems have one final LTS kernel that supports them.

Most people don’t care a whole lot about this. A very few were happy about it, as there is now “one less old dead platform” in the Linux kernel. Some, however, were concerned both about those with remaining Itanium hardware, and about whether this signals impending doom for those of us who care about other architectures.

I’d like to explore a bit about the Itanium processor, my personal feelings on this news, and my belief that this removal is not a harbinger of doom for any other architectures.

The Itanium wasn’t a typical CPU

First, let’s start with a primer on the Itanium itself. Most CPUs fall into one of two categories: RISC (“Reduced Instruction Set Computer”) or CISC (“Complex Instruction Set Computer”). They are named for the size and complexity of the instruction set that the computer understands at the lowest level.

A RISC CPU has basic operations like add, subtract, jump, and conditional branch. A CISC CPU has richer operations that can be performed in a single instruction, such as square root and binary-coded decimal arithmetic. This comes at the cost of extra power consumption and a much more complicated chip design.

Typical RISC systems that you may recognise include Arm, PowerPC, SPARC, and MIPS. CISC systems include the Intel x86, the Motorola 68000, and mainframes like IBM’s System z.

Itanium is neither CISC nor RISC. It is what is termed a “VLIW”, or Very Long Instruction Word, CPU. VLIW systems allow the programmer to specify things like parallelisation, instruction scheduling, and instruction retirement. If these terms aren’t familiar to you, then you may already see why VLIW systems aren’t popular. The expectation is that the compiler – or, at lower levels like boot loaders and the compilers themselves, the human programmer – will perform work that most modern processors do in hardware.

It was also termed an “EPIC”, or Explicitly Parallel Instruction Computing, design, because each execution “slot” of the processor can be filled explicitly within a single assembly-language stanza (a “bundle”).
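For the curious, here is a rough sketch (from memory, in GNU assembler syntax, with arbitrary register choices) of what such a bundle looks like on ia64:

{ .mii                    // template: one memory slot, two integer slots
  ld8  r4 = [r5]          // slot 0: load eight bytes from the address in r5
  add  r6 = r7, r8        // slot 1: integer add
  add  r9 = r10, r11 ;;   // slot 2: integer add; ";;" is an explicit stop
}

Everything before the stop is declared by the programmer (or compiler) to be safe to execute in parallel; the processor does not work that out for itself.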

The only other “popular” VLIW systems are some graphics cards (which is why, for a time, they were the best at mining cryptocurrency) and Russia’s home-grown Elbrus architecture.

Compilers are still evolving in 2023 to handle the sorts of problems the Itanium brings to the forefront, with the goal of making code faster. The theory is that if compilers can output a more ideal ordering of instructions, code will execute faster on any architecture. However, the Itanium launched in 2001, before most compiler designers had even considered doing this sort of work.

Hardware dearth leads to port death

There are many CPU architectures in the world. I don’t personally believe Itanium is a signal that various other CPU architectures might be next for Linux’s chopping block. There are many reasons for this belief, but the most important one is that Itanium hardware has always been scarce.

At the start of the Itanium’s life, circa 2001, there were a few different vendors who shipped hardware with it. These were HP, SGI (which spun MIPS into its own company to focus on Itanium systems), and Dell. IBM did create a single Itanium-based system, but it was short-lived. Across its life, there were various other manufacturers that created a few systems. The main driver of Itanium was HP, who had a hand in creating the architecture and had to pay Intel a significant amount of money to keep producing it towards the end of its life.

Various statistics are available to show just how surprisingly low uptake of the Itanium was. Perhaps the most shocking is Gartner’s 2007 assessment: 8.4 million x86s purchased that year, 417,000 RISCs (virtually all of them PowerPC and SPARC), and just 55,000 Itanium systems, 90% of which were from HP. HP’s offerings were very expensive, required long-term contracts, and were aimed firmly at large enterprises.

Now let us compare this with the architectures that I’ve seen the most worry for: SPARC and Alpha.

Sun sold over 500,000 SPARC systems in 1999-2000 alone, which may be more than all Itaniums that exist in the entire world right now.

It’s really hard to extrapolate sales figures for Alpha, but Compaq’s Q4 1999 Alpha sales for Western Europe alone were $245 million. The highest-priced AlphaServer I could find in Compaq’s 1999 catalogue was the ES40 6/667, at $48,000, but we’ll go ahead and double that to account for potential support contracts and hardware upgrades. At $96,000 per system, $245 million works out to roughly 2,500 units shipped in a single quarter, only to Western Europe. We can assume that many businesses bought the lower-end models, so the real unit count is likely far higher. Realistically, I would assume Alpha sold about 100,000 units in 1999. Recall that Gartner counted just 55,000 Itanium systems, across all vendors, for 2007.

Beyond that, let’s take a look at the used market. Linux contributors rarely work on these architecture ports using hardware they bought new 20+ years ago; they buy used hardware that they enjoy using, and contribute with it.

Itanium systems are currently running somewhere between 600 USD and 2,000 USD on eBay, with a few below 600. Most of the ones below 600 are either not working, or individual blades that must be installed into an HP BladeSystem enclosure. That enclosure is a separate purchase, very large, and probably only usable in a real datacentre. There are also a few newer models above 2,000 USD. There are 277 systems listed in the “Servers” (not “Parts”) category. The “largest” system I could find had 4 GB RAM.

There are “more than 1,300” SPARC systems on offer on eBay, with the typical range being 100 to 300 USD. There are more costly examples, and Blade 2000/2500s (desktops with GPUs) are around 1000 USD.

There are 436 AlphaServers, and the average seems to run 400 to 1200 USD. Some of these systems have 8 GB RAM or more, and more of them seem to include seller-offered warranties than Itanium. And let us remember that Alpha was discontinued around the same time Itanium was newly introduced.

Genuine maintenance concerns

There are more than a few concerns about Itanium from a Linux kernel maintenance point of view. One of the most prominent is the EFI firmware. It is based on the older EFI 1.10 standard, which pre-dates UEFI 2.0 by some years and does not include a lot of the interfaces that UEFI does. By itself this isn’t a large concern, but to ensure the code is functional, it needs to be built and tested, and there were simply not enough users to do this at a large enough scale. Developers wanted to work on EFI code, but did not have the ability to test it on Itanium.

The architecture is different enough from any of the others that it requires special consideration for drivers, the memory manager, I/O handling, and other components. Typically, for architectures such as the Itanium, you really want one or more people who know a lot about the internals to be present and ready to test patches, answer questions, and participate in kernel discussions. This simply wasn’t happening any more. Intel washed their hands of the Itanium long ago, and HPE has focused on HP-UX, even explicitly marking Linux as deprecated on this hardware back in 2020.

The 68k has Amiga, Atari, and Mac communities behind it. The PowerPC is still maintained largely by IBM, even the older chips and systems. Fujitsu occasionally chimes in directly on SPARC, and there is an active community of users and developers keeping that port alive. There are a number of passionate people, whether hobbyists or community-supported, doing this necessary work for a number of other architectures.

Unfortunately, the Itanium just doesn’t have that organisation – and I still largely suspect that is due to a lack of hardware. There does seem to already be a small number of enthusiasts trying to save it, and I wish them the very best of luck. The Itanium is very interesting as a research architecture and can answer a lot of questions that I feel ISA and chip designers will have in the coming decades about different ways of thinking, and what works and what doesn’t work.

In conclusion

The Itanium was an odd fellow of a CPU architecture. It wasn’t widely adopted while it was around, and it was discontinued by its final manufacturer in 2021. Used examples are uncommon and more expensive than those of other, better-supported architectures, and that hardware is exactly what would be needed to keep maintaining software for it.

While it is always disappointing when Linux drops support for an architecture, I don’t think the Itanium is some sort of siren call that implies more popular architectures will be removed. And I will note that virtually every architecture is more popular than the Itanium.

Expanding the Retro Lab, and Putting It to Work

Over the past month, I have been blessed with being in the right place at the right time to acquire a significant amount of really cool computers (and other technology) for the Retro Lab.

Between the collection I already had and these new “hauls”, I now have a lot of computers. I was, ahem, encouraged to stop using the closets in my flat to store them and finally obtained a storage locker for the computers I’m not using. It’s close to home, so I can swap between what I want to work on virtually at will.

Now I am thinking about ways to track all of the machines I have. One idea I’ve had is to use FileMaker Pro for the Power Macintosh to track the Macs, and FoxPro to track the PCs. One of my best friends, Horst, suggested I could even use ODBC to potentially connect the two.

This led me to all sorts of ideas regarding ways to safely and securely run some server services on older systems and software. One of my acquisitions was a Tyan 440LX-based server board with dual Pentium II processors. I’m thinking this would be a fun computer to use for NT. I have a legitimate boxed copy of BackOffice Server 2.5 that would be perfect for it, even!

Connecting this system to the Internet, though, would present a challenge if I want to have any modicum of security – so I’ve thought it out. And this is my plan for an eventual “Retro Cloud”.

Being a cybersecurity professional, I first want to completely isolate it on the network. I can set up a VLAN on my primary router, and connect that VLAN to a dedicated secondary router. That secondary router would be totally isolated from my present network, so the “Retro Cloud” would have its own subnet and no way to touch any other system. This makes it safer to have an outbound connection. I’ll be able to explore Gopherspace, download updates via FTP, and all that good stuff.

Next, I’m thinking that it would make a lot of sense to have updated, secure software to proxy inbound connections. Apache and Postfix can hand sanitised requests to IIS and Exchange without exposing their old, potentially vulnerable protocol handlers directly to the Internet.
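As a rough sketch of the web half of that idea (the hostname and address below are placeholders I made up, and mod_proxy and mod_proxy_http would need to be enabled), the Apache side could be as simple as:

<VirtualHost *:80>
    ServerName retro.example.org
    # Apache terminates the client connection itself; only the proxied
    # request it generates ever reaches the NT box running IIS.
    ProxyRequests Off
    ProxyPass        / http://192.0.2.10/
    ProxyPassReverse / http://192.0.2.10/
</VirtualHost>

Postfix could play the same role for mail, accepting SMTP from the outside world and relaying only clean messages to Exchange on the isolated subnet.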

And finally, as long as everything on the NT system is public knowledge anyway – don’t (re)use any important passwords on it, don’t have private data stored on it – the risk is minimal even if an attacker were able to gain access despite these protections.

I’m still in the planning stages with this project, so I would love to hear further comments. Has anyone else set up a retro server build and had success securing it? Are there other cool projects that I may not have even thought of yet? Share your comments with me below!