Ah, wonderful health hazards

I can’t tell what has been overall worse for my health in the past few weeks. The bathroom connected to my home office sits directly over the complex’s “laundromat station”. This never used to bother me. In fact, I was quite okay with it, because it means I have the shortest walk to the laundry of any of my neighbours. However, for the past two or three weeks, I can smell — from the office, mind — a very strong odour of laundry detergent every time someone does a load. It turns out a lot of people do loads in the 18:00 to 21:00 time slot on weekdays, which happens to be when I am at my most productive in my office. I cannot imagine this is at all healthy for me.

But then I remember that I’ve spent multiple hours every day since Saturday trying to set up OpenLDAP for a new project. I’ve always used Active Directory on the server side, so my only experience thus far with OpenLDAP has been client-side. It’s a great client library with easy configuration and a great debug mode that will tell you exactly what is happening and what is going wrong. Unfortunately, the server part, at least on Debian, uses “dynamic configuration”, which means the configuration itself is stored in LDAP.

Now, look, LDIF and LDAP are fine and great for phone book-style records. That makes perfect sense; it is what they were designed to do. Storing regexps in ASN.1 BER is pushing it. But the way they do HDB/MDB grouping feels to me like trying to fit in with all those cool kids with their NoSQL and their MapReduce and their terrible terribly-great performance by using “shards” everywhere. And our project lead wants replication so that it’s fault-tolerant. Now I get to convert decades-old documentation about an “enterprise” feature to this “dynamic configuration” thing. I cannot imagine this is at all healthy for me.

Configuring OpenLDAP to authenticate using X.509 client certificates

This is not meant to be a comprehensive guide by any means, but information on the Web for configuring OpenLDAP to authenticate using X.509 client certificates is lacking, and in some cases over a decade old! It took me hours to find the documentation I needed, but only minutes to see it working once I had the correct “recipe”.

You should probably be running your own Certificate Authority for the purpose of generating client certificates, especially since you need one per user. You can lock it up tightly and only use it for the purposes of LDAP if you like. You can also use a certificate vendor like Thawte or GeoTrust or Comodo. Make sure you pick just one, though, because you will configure OpenLDAP to trust only that single CA to sign all the relevant client certificates. (This ensures that nobody can come in with a forged certificate signed by another vendor, or a self-signed one.)

The Ubuntu guide on making a CA is pretty decent, though unfortunately it uses the inferior GnuTLS package. That’s okay, because we are only using it for OpenLDAP. In fact, you can’t use OpenSSL-generated certificates with Debian’s OpenLDAP, because Debian patched it in such a way that those certificates cannot be read. (There are conflicting reports on whether this bug has been fixed upstream.) Note that you definitely want to set a higher expiration_days than the default 365! 10 or even 15 years isn’t unheard of; 15 years is 5475 days, if you were wondering.
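For what it’s worth, the CA side with certtool boils down to a template and two commands. Here is a minimal sketch, assuming the gnutls-bin package is installed; the CN and file names are my own placeholders:

```shell
# Minimal private-CA sketch using GnuTLS certtool (from gnutls-bin).
# The CN and file names below are placeholders; adjust to your project.
cat > ca.tmpl <<'EOF'
cn = "My Project LDAP CA"
ca
cert_signing_key
# roughly 15 years, not the 365-day default
expiration_days = 5475
EOF

# Skip the actual key/certificate generation when certtool is absent,
# so the sketch is safe to run anywhere.
if command -v certtool >/dev/null 2>&1; then
    certtool --generate-privkey --outfile ca_key.pem
    certtool --generate-self-signed --load-privkey ca_key.pem \
             --template ca.tmpl --outfile authority.pem
fi
```

The resulting authority.pem is what the OpenLDAP configuration below refers to; guard ca_key.pem as tightly as you can.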

Once you have either created your CA or decided on a vendor, you may begin configuring OpenLDAP. Replace authority.pem with the file name of your CA’s root certificate, and ldap_cert.pem and ldap_key.pem with those of the server certificate and its private key. Note that the server certificate must have the FQDN of the LDAP server as its only CN. It may have a wildcard as a subjectAltName (or SAN), but the FQDN (normally something like ldap01.myproject.org) must be the CN.
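In certtool terms, that CN/SAN requirement looks something like the following template (the server name and validity period are examples; each dns_name line becomes a subjectAltName entry):

```
# certtool template for the *server* certificate (ldap_cert.pem).
# The FQDN must be the CN; the SAN may add a wildcard.
cn = "ldap01.myproject.org"
dns_name = "ldap01.myproject.org"
dns_name = "*.myproject.org"
tls_www_server
encryption_key
signing_key
expiration_days = 3650
```

You would then sign it against the CA with something like `certtool --generate-certificate --load-privkey ldap_key.pem --load-ca-certificate authority.pem --load-ca-privkey ca_key.pem --template ldap_cert.tmpl --outfile ldap_cert.pem`.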

With slapd.conf

TLSCACertificateFile /etc/ssl/certs/authority.pem
TLSCertificateFile /etc/ssl/certs/ldap_cert.pem
TLSCertificateKeyFile /etc/ssl/private/ldap_key.pem
TLSVerifyClient try

With Dynamic Configuration, aka cn=config, aka “OLC”/on-line configuration, aka …

dn: cn=config
changetype: modify
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/ssl/certs/authority.pem
-
add: olcTLSCertificateFile
olcTLSCertificateFile: /etc/ssl/certs/ldap_cert.pem
-
add: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/ssl/private/ldap_key.pem
-
add: olcTLSVerifyClient
olcTLSVerifyClient: try
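Unlike slapd.conf, this snippet does nothing by itself; it has to be fed to the running slapd with ldapmodify. A sketch, assuming you are root on the server itself (Debian’s default install maps the local root user to the cn=config admin via SASL EXTERNAL over the ldapi socket):

```shell
# Write the modification to a file (the name is arbitrary)...
cat > tls.ldif <<'EOF'
dn: cn=config
changetype: modify
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/ssl/certs/authority.pem
-
add: olcTLSCertificateFile
olcTLSCertificateFile: /etc/ssl/certs/ldap_cert.pem
-
add: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/ssl/private/ldap_key.pem
-
add: olcTLSVerifyClient
olcTLSVerifyClient: try
EOF

# ...then apply it over the local ldapi:/// socket with SASL EXTERNAL.
# Guarded so the sketch is harmless on machines without a running slapd
# (the socket path is Debian's default; adjust if yours differs).
if command -v ldapmodify >/dev/null 2>&1 && [ -S /var/run/slapd/ldapi ]; then
    ldapmodify -Y EXTERNAL -H ldapi:/// -f tls.ldif
fi
```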

Note that if you receive an error such as:

ldap_sasl_interactive_bind_s: Unknown authentication method (-6)
 additional info: SASL(-4): no mechanism available:

then you most likely forgot olcTLSVerifyClient, like I did the first time 🙂 Note that there is nothing printed after “no mechanism available: ”. That was the hardest part to debug! Hopefully this can help a few people out.
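To see the whole thing working end to end, ldapwhoami makes a decent smoke test from a client machine. The paths, host name, and user are assumptions on my part; the LDAPTLS_* environment variables are documented in ldap.conf(5):

```shell
# Point the OpenLDAP client library at the CA and at one user's
# certificate and key (all paths and names here are examples).
export LDAPTLS_CACERT=/etc/ssl/certs/authority.pem
export LDAPTLS_CERT="$HOME/alice_cert.pem"
export LDAPTLS_KEY="$HOME/alice_key.pem"

# -ZZ forces StartTLS; -Y EXTERNAL asks slapd to take the identity from
# the client certificate. "|| true" keeps this sketch from aborting when
# no test server is actually reachable.
if command -v ldapwhoami >/dev/null 2>&1; then
    ldapwhoami -ZZ -H ldap://ldap01.myproject.org -Y EXTERNAL || true
fi
```

On success, ldapwhoami prints the DN that slapd derived from the certificate, which is exactly the DN that has to match your LDAP object.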

Also note that for client certificates to work correctly, the DN of the X.509 certificate must exactly match the DN of the LDAP object. If you cannot meet that requirement, you will need to look at authz-regexp: for cn=config, see this mailing list posting, and for the traditional configuration, see the documentation. Note that I was unsuccessful in making this seemingly-useful feature work, but you may have better luck than I did.
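For completeness, here is the shape such a mapping is supposed to take (untested by me, per the above; the DN patterns are invented for illustration). With SASL EXTERNAL, slapd sees the certificate’s subject DN, normalized to lower case, and authz-regexp rewrites it to a real entry’s DN:

```
# slapd.conf style: map "cn=alice,o=myproject" from a certificate
# to the real entry cn=alice,ou=people,dc=myproject,dc=org
authz-regexp
        "^cn=([^,]+),o=myproject$"
        "cn=$1,ou=people,dc=myproject,dc=org"

# cn=config style (the attribute is olcAuthzRegexp on cn=config):
dn: cn=config
changetype: modify
add: olcAuthzRegexp
olcAuthzRegexp: "^cn=([^,]+),o=myproject$" "cn=$1,ou=people,dc=myproject,dc=org"
```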

Let’s Encrypt and why I still pay for TLS certificates

I am asked with alarming regularity why I am not using Let’s Encrypt for my personal Web sites, and for Adélie’s site, and for my mother’s art gallery site, and so on. “Why do you pay money for something you could have for free? And then you aren’t giving money to those evil CAs!”

TLS certificates are still very much “you get what you pay for”. Let’s Encrypt is free, and on paper it seems to be a great solution with roots in freedom and socialism. However, it has a number of large issues in practice that prevent me from being able to adopt it.

The first, and most evident, is the failure of the community to provide a single ACME client that is well supported and configurable. As of this writing, there are 49 different client implementations listed on the official site. The problems with them are as numerous as the offerings; my main complaint is that most of them must run as the root user so they can automatically overwrite sensitive certificate files, which are owned by the Web server user and chmod 400.

The second large issue I’ve seen is that most of these ‘automatic updates’ break. This can be due to administrator error – and since there is not one single option, there cannot be a single repository of knowledge. This can also be due to APIs or endpoints changing. I have seen an official Mozilla blog and Void Linux’s repository broken in the last week alone, all by botched ACME cron jobs. This solution is sold as “set and forget”, but it requires more effort than simply going to a site every year and inputting a CSR and privkey.

Other issues with Let’s Encrypt include the lack of a “site seal”, which is very important on e-commerce sites to foster user trust, and the absence of OV (let alone EV) certificates, which also compromises trust among users who know what to look for.

All in all, I think that going forward, Let’s Encrypt may be suitable for power users and people who run TLS servers at home. It may even be suitable for some personal sites and blogs. But I don’t think it is a long-term solution for people who need trust, or for those who have a complicated infrastructure (such as a distro, like Adélie).

Going IPv6 native without IPv4

Now that I have finally moved in to my new apartment (which requires a long blog of its own), I have new routing equipment and a new network infrastructure. The native IPv6 on Cox Communications seems to be a bit better than the native IPv6 offered by Comcast Business; namely, Cox seems to be peered more widely and therefore ping times are much lower. Of course, this could be specific to the market I’m in – eastern Oklahoma – so YMMV.

However, because DHCP is a terrible protocol, my IPv4 lease is constantly flaking out, leaving me with IPv6-only access to the Internet; that is, no access to IPv4 whatsoever. Surprisingly, it’s nearly usable. However, I am highly disappointed by a few surprises I’ve found that do not work over IPv6:

  • EVERY SINGLE CODE HOSTING SERVICE ON THE INTERNET. This really, really, really, really upsets me. Luckily, I don’t have to care any more, because I run my own now.
  • DuckDuckGo. I am incredulous that a modern search engine is not accessible over IPv6.
  • eBay and PayPal. This isn’t really surprising, I suppose, since eBay were running Windows NT 4 as recently as 2006… they have always been a decade behind the current technology.
  • Any news Web site I tried: Bloomberg, BBC, New York Times, Washington Post.
  • The entire StackExchange family of properties, five YEARS after being asked for even a trial of IPv6 access. This is entirely unacceptable. I expect news organisations and e-commerce conglomerates to be woefully behind the times, but a company designed from the ground up for computer scientists by computer scientists? I can’t believe this is real.
  • Weather.gov. The US government actually has an IPv6 project with real-time completion progress online, itself available via IPv6. However, while NOAA’s flashy Web 3.0 marketing pages are available over IPv6, the important research, life-saving data, and forecast information made by the National Weather Service are entirely IPv4-only. I understand that, internally, their infrastructure is not entirely ready for IPv6, but they should at least be able to run the main radar and warning information over IPv6. Americans need not feel singled out, though; the UK’s Met Office is also unavailable over IPv6.

At least Wikipedia and the Google properties are usable, so I have music, videos, and a reference library.