Compiling XIBs with CMake without Xcode

I’ve been enjoying using the JetBrains IDE CLion to do some refactoring and make improvements to the Auctions code base. However, when I tried to build the Mac app bundle with it, the app failed to launch:

2022-07-30 19:54:15.117 Auctions[80371:16543044] Unable to load nib file: Auctions, exiting

The XIB files were definitely part of the CMake project. I later learned that CMake does not automatically add XIB compilation targets to a project. It relies on the Xcode generator to do that.

I found a long-archived documentation page from CMake on the Kitware GitLab that described a method to build NIB files from XIBs, and have modified it to make it simpler for Auctions.

You can see the change in the commit diff, but I’ll include the snippet here for posterity.

First, you define a list containing the XIB file names, without the .xib suffix. For instance, I’ve done set(COCOA_UI_XIBS AXAccountsWindow AXSignInWindow Auctions) for the three XIB files presently in the codebase.

Then we have the loop to build them:

find_program(IBTOOL ibtool REQUIRED)
foreach(XIBFILE ${COCOA_UI_XIBS})
  add_custom_command(TARGET Auctions POST_BUILD
    COMMAND ${IBTOOL} --compile ${CMAKE_CURRENT_BINARY_DIR}/Auctions.app/Contents/Resources/${XIBFILE}.nib ${CMAKE_CURRENT_SOURCE_DIR}/${XIBFILE}.xib
    COMMENT "Compiling NIB file ${XIBFILE}.nib")
endforeach()

Now it starts correctly and works properly when built from within CLion. This was surprisingly difficult to debug and fix, so I hope this post can help others avoid the hours of dead ends that I endured.

Until next time, Happy Hacking!

Expanding the Retro Lab, and Putting It to Work

Over the past month, I have been blessed with being in the right place at the right time to acquire a significant number of really cool computers (and other technology) for the Retro Lab.

Between the collection I already had and these new “hauls”, I now have a lot of computers. I was, ahem, encouraged to stop using the closets in my flat to store them and finally obtained a storage locker for the computers I’m not using. It’s close to home, so I can swap between what I want to work on virtually at will.

Now I am thinking about ways to track all of the machines I have. One idea I’ve had is to use FileMaker Pro for the Power Macintosh to track the Macs, and FoxPro to track the PCs. One of my best friends, Horst, suggested I could even use ODBC to potentially connect the two.

This led me to all sorts of ideas regarding ways to safely and securely run some server services on older systems and software. One of my acquisitions was a Tyan 440LX-based server board with dual Pentium II processors. I’m thinking this would be a fun computer to use for NT. I have a legitimate boxed copy of BackOffice Server 2.5 that would be perfect for it, even!

Connecting this system to the Internet, though, would present a challenge if I want to have any modicum of security – so I’ve thought it out. And this is my plan for an eventual “Retro Cloud”.

Being a cybersecurity professional, my first thought was to completely isolate it on the network. I can set up a VLAN on my primary router, and connect that VLAN to a dedicated secondary router. That secondary router would have total isolation from my present network, so the “Retro Cloud” would have its own subnet and no way to touch any other system. This makes it safer to have an outbound connection. I’ll be able to explore Gopherspace, download updates via FTP, and all that good stuff.

Next, I’m thinking that it would make a lot of sense to have updated, secure software to proxy inbound connections. Apache and Postfix can hand sanitised requests to IIS and Exchange without exposing their old, potentially vulnerable protocol handlers directly to the Internet.

And finally, as long as everything on the NT system is public knowledge anyway – don’t (re)use any important passwords on it, don’t have private data stored on it – the risk is minimal even if an attacker were able to gain access despite these protections.

I’m still in the planning stages with this project, so I would love to hear further comments. Has anyone else set up a retro server build and had success securing it? Are there other cool projects that I may not have even thought of yet? Share your comments with me below!

Wherefore art thou, USB-C hubs?

I’ve been looking for weeks at various stores around Tulsa, and online, for USB-C hubs. I already have a USB-C hub that has ports like Ethernet, HDMI, and USB-A. What I am looking for is a hub that has many USB-C ports.

As my Lightning cables age out, and I replace more equipment with devices that have only USB-C, more of my devices are connected this way.

My M1 MacBook Pro has two USB-C ports, but I have:

  • A USB-C SSD with my photo library.
  • My iPhone 12 with a Lightning to USB-C cable (all of my Lightning to USB-As are finally worn out).
  • My iPad mini which is USB-C to USB-C.
  • The aforementioned hub for connecting an external display.
  • Sometimes an optical drive, which yes, also uses USB-C.
  • The charging cable, because all of these devices pull a lot of power.

So, as a ballpark estimate, I need about six USB-C ports here. I really do not want to have to use a bunch of C-to-A adaptors, especially since some of my devices seem to slow down when using them. Has anyone seen anything like that out there? I drastically prefer to shop local, but at this point I would even consider buying from Amazon.

Or to put it in the words of one of my favourite bloggers: Dear lazyweb, where can I buy USB-C hubs?

The musl preprocessor debate

Today, I would like to discuss a project that I care very deeply about: the musl libc. One of the most controversial and long-standing debates in the musl community is the fact that musl does not define a preprocessor macro identifying itself.

What’s in a macro?

Simply put, preprocessor macros allow C code to build parts of itself conditionally. For example, the GNU libc defines the “__GLIBC__” macro. If your code needs to do something specific to function properly on systems using that library, it can conditionally build that code using “#ifdef __GLIBC__”.
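Here is a minimal sketch of that pattern; the call to gnu_get_libc_version() is simply an example of something you would only want compiled against the GNU libc:

#include <stdio.h>

#ifdef __GLIBC__
#include <gnu/libc-version.h>
#endif

int main(void)
{
#ifdef __GLIBC__
    /* Only compiled when the GNU libc defines __GLIBC__. */
    printf("Running on glibc %s\n", gnu_get_libc_version());
#else
    /* Portable fallback for every other libc. */
    printf("Running on a libc that is not glibc\n");
#endif
    return 0;
}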

The authors of musl have said that they will not add a preprocessor macro identifying the platform as musl because:

It’s a bug to assume a certain implementation has particular properties rather than testing.

Rich Felker, “Re: #define __MUSL__ in features.h”, 2013-03-29

I agree with this sentiment in theory, and in an idealised world this would hold up. However, I’d like to discuss why I think this may need to be reconsidered moving forward.

Sometimes you can’t test

One major reason this is an issue is that sometimes it is not possible to do what the authors consider the “correct” form of testing, which is compile-testing.

This practice requires you to build a small test program, check whether it compiled and linked successfully, possibly run it to determine its runtime characteristics, and then use the results of that test to influence how your actual software is built. It is the alternative to conditional code guarded by preprocessor macros.
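To sketch what that looks like in practice: a ./configure script typically writes out and compiles a throwaway program along these lines, and whether it links (and, for run-time checks, what it returns when executed) decides what the real build enables. The getentropy() probe and the HAVE_GETENTROPY name here are illustrative, not taken from any particular package.

/* conftest.c: if this compiles, links, and exits 0, the build system
 * concludes getentropy() is usable and defines something like
 * HAVE_GETENTROPY for the rest of the build. */
#include <unistd.h>

int main(void)
{
    unsigned char buf[16];
    return getentropy(buf, sizeof buf) == 0 ? 0 : 1;
}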

However, there are many reasons you may not be able to successfully perform such testing. Cross compilation is a large gap here. In fact, many years ago when I was starting the Adélie project, this caused failures in the base image I was building.

The Bash shell could not perform any compile-time or run-time checks because it was being cross-compiled from a GNU libc system to a musl libc system. This caused it to use “fallback” code that worked improperly. If musl had defined a __MUSL__ macro, Bash would not have needed to assume it was running on a pre-POSIX system.

Similarly, the mailing list thread that made me feel strongly enough to write this article involves a header-only library. These types of libraries are meant to be “drop-in” and function without any changes to a developer’s build system. If header-only libraries start requiring you to use build-time tests, you lose the main reason to use them in the first place.

The author of this thread correctly points out that FreeBSD versions its API with a preprocessor macro. Any software that requires a certain API can simply ensure that __FreeBSD_version is defined as greater than or equal to the version that introduced that API.
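Here is a hedged sketch of that pattern; the cutoff value and the messages are illustrative rather than tied to any specific API.

#include <stdio.h>
#ifdef __FreeBSD__
#include <sys/param.h>  /* provides __FreeBSD_version */
#endif

int main(void)
{
#if defined(__FreeBSD_version) && __FreeBSD_version >= 1300000
    /* 1300000 stands in for "the release that introduced the API we need". */
    puts("new API available, using it");
#else
    puts("using the portable fallback");
#endif
    return 0;
}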

The main reason the musl project is wary of this approach, at least from my observation, is that features, APIs, and indeed bug fixes can be backported to prior versions. I feel very strongly that handling this is not the responsibility of the libc.

If a distribution backports a feature, API, or patch to an older version of a library, it is that distribution’s responsibility to ensure that the software they build against it continues to function. When I backported an API from Qt 5.10 to 5.9 to ensure KDE continued building for Adélie, it was my responsibility as maintainer of those packages to keep them building properly. It certainly does not mean Qt should stop defining a preprocessor macro to determine the version being built against.

Additionally, some APIs are privileged. Testing at run time whether these APIs work correctly can break CI/CD pipelines, because the CI user does not have permission to use them.

A versioned macro like FreeBSD’s makes sense

I feel that the best way forward for musl is to define a macro like FreeBSD’s. It monotonically increases as APIs or features are added.

I agree that simple bug fixes, and even behavioural changes, probably should not be tracked with this macro. However, this would make it significantly easier to use new APIs as they are introduced.
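To be clear, musl defines no such macro today; the name and encoding below are entirely hypothetical, just to show how client code could consume a FreeBSD-style versioned macro if one existed.

#include <stdio.h>

/* __MUSL_VERSION__ is hypothetical; musl does not define it. Imagine it
 * increasing monotonically as new APIs are added, like __FreeBSD_version. */
#if defined(__MUSL_VERSION__) && __MUSL_VERSION__ >= 1002003
#define HAVE_NEW_INTERFACE 1
#endif

int main(void)
{
#ifdef HAVE_NEW_INTERFACE
    puts("built against a libc new enough to provide the interface");
#else
    puts("built with the portable fallback");
#endif
    return 0;
}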

It also makes builds more efficient. The cost of compile-time tests racks up quickly. On my POWER9 Talos workstation, typical ./configure runs take longer than the builds themselves. This is because fork+exec is still a slow path on POWER. It is similar on ARM, MIPS, and many other RISC architectures.

Macros like these don’t fully eliminate the need for ./configure, but they lessen the workload. Compile-time tests make sense for behaviour detection, but they do not make sense for API detection.